We are currently producing the first generation of this web application. However, if we break the application down into its constituent parts, there are many existing solutions we can build on for individual parts, even if none exists for the system as a whole. Below is an outline of what we looked into; further analysis is available further down.
Firstly, we have to build a login system for students, who log in with their university credentials. The options here include Azure Active Directory (a Microsoft service), setting our application up through it, or alternatively the UCL API, which is developed by students at UCL. In both cases the main feature is a call to the respective API to obtain an access token so the user can be validated through that service. The advantage for us is that it saves the effort of setting up a separate database for the login system. What we can learn from this is that ready-made authentication code exists, and it will need to be adapted to the context of our application.
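As an illustration of the Azure Active Directory route only, below is a minimal sketch assuming the @azure/msal-browser library (our assumption; the UCL API route would look different). The client ID, tenant ID and scope are placeholders that would come from an Azure AD app registration.

import { PublicClientApplication } from "@azure/msal-browser";

// Placeholders: the client ID and tenant come from the Azure AD app registration.
const msal = new PublicClientApplication({
  auth: {
    clientId: "<application-client-id>",
    authority: "https://login.microsoftonline.com/<tenant-id>",
    redirectUri: window.location.origin,
  },
});

async function signIn(): Promise<string> {
  await msal.initialize();
  // Interactive sign-in with the user's university account; returns an access token
  // that proves the user was validated by Azure AD.
  const result = await msal.loginPopup({ scopes: ["User.Read"] });
  return result.accessToken;
}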
Secondly, we need to set up 2FA (two-factor authentication) for the online class register. There are a number of articles that describe how to add this to a web app generically. We need to make sure it works with Microsoft systems, hence the following is suitable, for example: Two Factor Authentication Article. This is only one of the articles we found that describe how to implement the system. Its main value is showing how two-factor authentication actually works on the client and server side. For example, it uses Twilio to authenticate the user by sending them an SMS message. We made some rough calculations and determined that, at the scale of UCL, SMS would be far too expensive to use many times a day for each user. Instead we will support TOTP and allow the user to store the token in an authenticator application of their choice.
Even though this article covers Python Django and PHP, it is useful for understanding how two-factor authentication works under the hood. What we can learn is that after the user has signed in we need to add another security layer, which, in the case of this article, is a time-based token generated dynamically from the current time and the user's secret. This secret would need to be generated by our code and then stored safely in a database.
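A minimal sketch of that TOTP flow is shown below, assuming the otplib npm package (not the libraries used in the article); the account name and issuer are placeholders.

import { authenticator } from "otplib";

// Enrolment: generate a per-user secret and an otpauth:// URI that the student's
// authenticator app can import (usually rendered as a QR code).
const secret = authenticator.generateSecret();   // store safely, e.g. encrypted in the database
const otpauthUrl = authenticator.keyuri("student@ucl.ac.uk", "Class Register", secret);

// Verification: check the 6-digit code the user submits against the stored secret.
function verifyCode(submittedCode: string, storedSecret: string): boolean {
  return authenticator.verify({ token: submittedCode, secret: storedSecret });
}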
Then the question becomes how we store data within SharePoint, which is a completely different framework from everything else we are using. This was the only documentation we needed: Sharepoint Link. It contained all the necessary information to set up a SharePoint site and get it to store data. The key steps are that a SharePoint site first needs to be created, then a SharePoint list, which is essentially what stores the data, and finally, to create the dashboard view, these objects need to be displayed on the site page. This was not enough, however, as to get data into SharePoint we needed access to the site itself. We had two options for SharePoint: one was to allow our app access to all of SharePoint (Microsoft does not allow an app access to only one site), and the other was a Microsoft Flow that automates data from a Microsoft Form into SharePoint. Microsoft Flow is a process and task automation tool that helps connect different applications and services together. Many of the applications that can be used with Flow are cloud-based, although it is also possible to use Flow in an on-premises environment [1]. What we can learn from this existing system is that it is quite easy to automate data into SharePoint, so when we build our app, which will include students filling out forms, it will be appropriate to use a Microsoft Form and have the form data sent to a SharePoint list via the Flow.
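For completeness, a minimal sketch of what the first option (tenant-wide app access) could look like via the Microsoft Graph list-items endpoint is shown below; the site and list IDs are placeholders, the field names are hypothetical, and a valid access token is assumed.

// Hypothetical field names; the real columns depend on how the SharePoint list is defined.
const endpoint = "https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items";

async function addListItem(accessToken: string): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      fields: { Title: "Help request", Student: "student@ucl.ac.uk", Topic: "Data Structures" },
    }),
  });
  if (!response.ok) throw new Error(`Graph returned ${response.status}`);
}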
Our team wrote an article about implementing single sign-on (SSO), and it can be found here.
This web app should support all device types; it is made for students, hence compatibility across all devices is necessary.
This section is divided into parts describing the programming languages, frameworks, libraries, APIs and extras that we have researched for this project. Comparisons have also been made where appropriate (where we had flexibility of choice).
The first task was to pick an appropriate language/framework to build our system in. We had two options: a standard multi-page HTML, CSS and JavaScript system, or a single-page application built with React. There were a few non-functional requirements we would need to take into account.
Below is a comparison of React and traditional multi-page apps:
In conclusion, we have decided to go ahead with a single-page app. Firstly, single-page apps load faster, which is convenient for users. Switching to an MPA would not mitigate XSS concerns, since it is still common to use JavaScript with external dependencies there; an SPA simply makes JavaScript mandatory. Furthermore, as long as we are careful to limit our dependencies to a few trustworthy packages, we should not have too many issues.
Which JavaScript framework should we use for the SPA? There are a few main frameworks/libraries to consider:
React: a front-end library used for creating user interfaces. It renders through its own virtual DOM, which improves performance. Less code needs to be written to do the same amount of work as in other frameworks [3]. Code is split into components, which makes it reusable. It is also relatively stable, having been adopted by many major companies, all with a vested interest in keeping React working [4].
React also works with React Native, which makes it easier to develop mobile applications, so it would be better for future development of this app.
Node: a lightweight server-side JavaScript runtime. It is well suited to reading large streams of data, and is highly extensible thanks to its existing package modules.
Additionally, Node executes outside the browser [5]. 'It’s a lightweight and efficient JavaScript runtime environment on the server side, powered by the Chrome V8 JavaScript engine, that uses a non-blocking I/O model' [2].
Angular: components are known as directives. These represent DOM elements to which Angular attaches behaviour once it finds them. HTML elements are broken down into component parts and behave as JavaScript code, which makes it different from both React and Vue [6].
With regard to the information above, we have decided to go ahead with React. Firstly, it is the fastest in server-side rendering according to the chart above [8]. Secondly, it is fairly easy to pick up according to many sources, which matters because two of our three members have never done web development and we only have around 11 weeks for this project. The requirements are also biased towards front-end development plus a few API calls to MS Graph to produce the required functionality, so React is suitable here. The most important reason is that we can easily reuse the backend code and some of the front-end logic when making a native application for Android and iOS. For our first prototype we plan to make a responsive web app and then move it to React Native during the second prototype stage.
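Since components are React's unit of reuse, a small, hypothetical example of what one of our components might look like is shown below (the component name and props are ours, not from any existing codebase).

import React from "react";

// A minimal component: props in, rendered markup out.
type ModuleCardProps = { title: string; completed: boolean };

export function ModuleCard({ title, completed }: ModuleCardProps) {
  return (
    <div className="module-card">
      <h3>{title}</h3>
      <span>{completed ? "Completed" : "In progress"}</span>
    </div>
  );
}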
Our client believes our app has potential beyond UCL, and he also wants the app to be easily redeployable by people without much technical expertise. This presents a somewhat difficult challenge: since we are Azure-native (our application runs entirely on Azure cloud services), we need to deploy many resources in an automated manner. We also need a configuration file that administrators can edit and that is then reflected in the deployed version of the app.
We spent a significant amount of time researching Terraform, but during development we usually ran the final-stage commands manually, and we eventually converted that manual process into a Python script.
Terraform is a program that lets you declare the exact state of the deployed cloud resources; it can then query the Azure API to deploy them. If we ever change the configuration, Terraform can handle the incremental updates without destroying and redeploying the resources. There are two Terraform providers designed for deploying Azure resources: azurerm and azuread.
We also experimented with Azure Resource Manager (ARM) templates, but they have relatively limited functionality and are difficult to automate, so further details on this have been omitted.
Although Terraform is great, it cannot deploy the actual code, only the "slots" where the code goes. Furthermore, we need to customise variables within both the client- and server-side code, some of which can only be known after deployment. This is why we have to run Terraform first and then upload the server and client code.
The main API we have researched is the MS Graph API, as this is what most of the requirements come down to. We can make appropriate calls to Graph, which contains user information, in order to create a personalised experience. It exposes a REST API for accessing data from Microsoft cloud services. For example, we can use it to develop the engagement section, where we can add a part that gets data from a student's calendar and displays which events they are signed up to or need to attend. Below is a link to the Graph API: https://docs.microsoft.com/en-us/graph/overview. Our partner introduced us to the Graph Explorer, which lets us make example API calls to Graph to see examples of the responses we would get. This is particularly useful for ensuring we make the right API call to the right resource. Since this app is supposed to be generic, we can define a generic API call to MS Graph, and the only thing we would need during deployment is an authentication token from Azure Active Directory. Again, a link to this is here: https://developer.microsoft.com/en-us/graph/graph-explorer?request=me&version=v1.0.
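Below is a rough sketch of the kind of calendar call described above, assuming a valid access token with permission to read the signed-in user's calendar; the $select fields and $top limit are just illustrative choices.

const GRAPH_BASE = "https://graph.microsoft.com/v1.0";

async function getUpcomingEvents(accessToken: string) {
  // /me resolves to whichever user the supplied token belongs to.
  const response = await fetch(`${GRAPH_BASE}/me/events?$select=subject,start,end&$top=10`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) throw new Error(`Graph returned ${response.status}`);
  const data = await response.json();
  return data.value; // Graph wraps collections in a "value" array
}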
Another important API is the Microsoft Learn API. This is a simple catalogue containing the details, in JSON format, of all the modules and learning paths that Microsoft provides. This is useful because our application can make an API call to retrieve this data and present it to the user when they need assistance with a particular topic. To complete a module, students must head over to the actual website, which our app would link to. Details regarding the MS Learn API can be found below: Link to MS Learn Catalog.
The response body of calling this API would look as follows:
{
  "modules": [ ... ],
  "learningPaths": [ ... ],
  "products": [ ... ],
  "roles": [ ... ],
  "levels": [ ... ]
}
We are particularly interested in just the modules and learning paths, so the appropriate courses for the user's input can be extracted from those.
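A sketch of that extraction is shown below. The catalogue URL and the "title" field name are assumptions on our part; the exact endpoint and response fields should be checked against the MS Learn Catalog link above.

// Assumed catalogue endpoint; verify against the MS Learn Catalog documentation linked above.
const LEARN_CATALOG_URL = "https://learn.microsoft.com/api/catalog/";

async function searchCourses(keyword: string) {
  const response = await fetch(LEARN_CATALOG_URL);
  const catalog = await response.json();
  const match = (item: { title: string }) =>
    item.title.toLowerCase().includes(keyword.toLowerCase());
  // Keep only the two collections we care about.
  return {
    modules: catalog.modules.filter(match),
    learningPaths: catalog.learningPaths.filter(match),
  };
}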
A final API that we have researched is LinkedIn Learning. Since UCL already provides free licences for this to students, it is a free resource that our application could surface. Currently there is no existing code that uses this API. To use it, we need to provide a client key and secret, which we must obtain from the teaching and learning team at UCL. After receiving these, we would need to generate a token (or our app would have to do it at runtime), which is used in combination with the student's search to generate a result. A detailed explanation can be found at this link: Link to Linkedin Learning API. This provides a thorough breakdown of how to use the API to make calls and get the data. We would simply need to make a GET request as below:
GET https://api.linkedin.com/v2/{service}/{resourceIdentifier}
Here we specify the service and resource that we want. For example, we can tell the API to specifically return videos, modules or learning pathways. Unlike MS Learn, we cannot cache the data directly in the application, as the catalogue is not only too large but not practical to store at all. Additionally, due to the way the API works, there would be a noticeable delay before students get their search results compared to MS Learn. Below is a sample response from calling the API:
{
  "urn": "urn:li:lyndaCourse:563322",
  "title": {
    "locale": {
      "country": "US",
      "language": "en"
    },
    "value": "Measuring Company Culture"
  }
}
The response from the LinkedIn Learning API can be filtered on a number of fields. Due to the number of fields, we have omitted them here, but if you would like to know more about them please check this link: Response Fields.
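A rough sketch of the token-then-GET flow is shown below, assuming the standard OAuth 2.0 client-credentials exchange. The "learningAssets" service name is our assumption following the {service}/{resourceIdentifier} pattern above, so the exact path and query parameters should be checked against the documentation linked earlier.

// Exchange the client key and secret (from UCL's teaching and learning team) for a token.
async function getLinkedInToken(clientId: string, clientSecret: string): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
  });
  const response = await fetch("https://www.linkedin.com/oauth/v2/accessToken", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  const data = await response.json();
  return data.access_token;
}

// Generic GET following the pattern above; "learningAssets" is an assumed service name.
async function getLearningAsset(token: string, resourceIdentifier: string) {
  const response = await fetch(`https://api.linkedin.com/v2/learningAssets/${resourceIdentifier}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.json();
}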
The research for the QnA bot was not too difficult, as our technology partner shared with us a simple way to set one up through Azure. If more details are required, please check this link: QnABot. Essentially, we would need to set up a few resources, including a QnA Maker resource and a Web App Bot connected to a knowledge base. Then, in order to actually update the knowledge base, we would need to find appropriate API calls for adding QnA pairs to it. We found this link: API for QnABot. This API lets us update the knowledge base of the QnA bot, provided we supply certain parameters: the knowledge base ID, the endpoint and the subscription key, all of which can be found in the resources we create in Azure. Very simply, we would make a request to the QnA API with the above values as well as the QnA pair we would like to add, and then check whether the knowledge base is ready to be published. Note: adding a QnA pair does not automatically publish it to production; the add only trains the bot in the knowledge base. Publishing the knowledge base to the live version has to be done by a separate API call. This is useful because Microsoft Flow can perform HTTP requests, meaning we can set up flows to perform these API calls directly from a SharePoint list containing the QnA pairs.
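Below is a minimal sketch of those two calls, assuming the QnA Maker v4.0 REST endpoints (in that API reference the knowledge base update is exposed as a PATCH and publishing as a POST to the same path); the resource name, knowledge base ID and subscription key are the values obtained from the Azure resources.

// Base endpoint of the QnA Maker resource created in Azure (placeholder name).
const QNA_HOST = "https://<qna-resource-name>.cognitiveservices.azure.com/qnamaker/v4.0";

async function addQnaPair(kbId: string, subscriptionKey: string, question: string, answer: string) {
  // Update (train) the knowledge base with a new QnA pair.
  await fetch(`${QNA_HOST}/knowledgebases/${kbId}`, {
    method: "PATCH",
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey, "Content-Type": "application/json" },
    body: JSON.stringify({ add: { qnaList: [{ id: 0, answer, questions: [question] }] } }),
  });
}

async function publishKnowledgeBase(kbId: string, subscriptionKey: string) {
  // Publishing pushes the trained knowledge base to the live endpoint used by the bot.
  await fetch(`${QNA_HOST}/knowledgebases/${kbId}`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
}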
The APIs we have researched are quite limited in number, as our client, Dean, was very specific about what he wanted, so the API section of our research is short.
To summarise, we have decided to go ahead with a single-page application, specifically React, for the front-end development of our web application, because React has a React Native counterpart, so a mobile application can be developed more easily in the future. Additionally, the backend store will be an Azure Cosmos DB for user settings and preferences. User requests (the ticketing system), such as things students need help with, will be stored on a SharePoint site so that staff can view them and respond directly to the student, which was in fact a direct requirement of our client. The automation of data into the SharePoint list will be handled by a Microsoft Flow. We have also decided to use the Microsoft Graph API, as it is the only appropriate API available for making the application feel native to the user (for example, retrieving data such as their name, meetings, etc.). We will use the Microsoft Learn API and LinkedIn Learning API to fetch courses, because these resources are readily available to students: universities that use Microsoft services are naturally given access to them. Finally, we will make use of the QnA bot directly from Azure, as it is simple to set up and easily applicable to any university.
1: https://www.contentformula.com/blog/what-is-microsoft-flow-and-how-can-i-use-it
2: https://rubygarage.org/blog/single-page-app-vs-multi-page-app
3: https://medium.com/@OPTASY.com/react-js-vs-node-js-what-are-the-main-differences-which-one-to-choose-for-your-next-web-app-7b07e344e4fb
5: https://stackoverflow.com/questions/56383144/reactjs-vs-nodejs-why-do-i-need-to-create-both
8: https://hackernoon.com/server-side-rendering-shootout-with-marko-preact-rax-react-and-vue-25e1ae17800f