Google Forms is a survey administration software included as part of the free, web-based Google Docs Editors suite offered by Google. The service also includes Google Docs, Google Sheets, Google Slides, Google Drawings, Google Sites, and Google Keep. Google Forms is only available as a web application.
With Google Forms, you can create and analyze surveys right in your mobile or web browser—no special software required. You get instant results as they come in. And, you can summarize survey results at a glance with charts and graphs.
If your website is static and you don't want to pay any server cost, you can use a Google Form as a lightweight backend: it saves money and is easy to implement as well.
Promises are used to handle asynchronous operations in JavaScript. They are easier to manage than callbacks when dealing with multiple asynchronous operations, where nested callbacks can create "callback hell" and unmanageable code.
A Promise is a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers with an asynchronous action's eventual success value or failure reason. This lets asynchronous methods return values like synchronous methods: instead of immediately returning the final value, the asynchronous method returns a promise to supply the value at some point in the future.
A Promise is in one of these states:
- pending: initial state, neither fulfilled nor rejected.
- fulfilled: the operation was completed successfully.
- rejected: the operation failed.
JavaScript is a single-threaded language: running code on a single thread is easy to reason about because we don't have to deal with the complicated scenarios that arise in multi-threaded environments, such as deadlock. Since JavaScript is single-threaded, it is synchronous in nature, which is why promises are needed to handle asynchronous operations.
Syntax -
var promise = new Promise(function(resolve, reject) {
  // do something, then call resolve(value) on success or reject(reason) on failure
});
Example - a promise that is immediately rejected:
var promise = new Promise(function(resolve, reject) {
  reject('Promise Rejected');
});
promise
  .then(function(successMessage) {
    console.log(successMessage);
  })
  .catch(function(errorMessage) {
    // the error handler is invoked because the promise was rejected
    console.log(errorMessage);
  });
Reference: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise
An animation lets an element gradually change from one style to another.
You can change as many CSS properties as you want, as many times as you want.
To use CSS animation, you must first specify some keyframes for the animation.
Keyframes hold what styles the element will have at certain times.
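For example, a minimal keyframe animation (the selector name and element are illustrative) that rotates an element over 2 seconds:

```css
/* The element to animate: runs the "spin" keyframes over 2 seconds, forever */
.loader {
  animation: spin 2s linear infinite;
}

/* Keyframes: styles at the start (0%) and end (100%) of the animation */
@keyframes spin {
  from { transform: rotate(0deg); }
  to   { transform: rotate(360deg); }
}
```

Changing the 2s value slows down or speeds up one full rotation.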
Use an animation-duration of 2s to slow down or speed up the rotation, and rotate the element to 360 degrees in the keyframes.
<logger name="package.web" level="INFO" >
<appender-ref ref="FILE" />
</logger>
You need to add the console appender.
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<!-- By default, encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
</encoder>
</appender>
<logger name="package.web" level="INFO" >
<appender-ref ref="FILE" />
<appender-ref ref="STDOUT" />
</logger>
Credit: https://stackoverflow.com/questions/45502073/logback-cant-write-in-console
You may have heard the term algorithm recently, whether it was online or perhaps in some conversation about technology. It's a word that gets thrown around a lot, but what does it mean exactly?
There might be multiple algorithms that solve the same problem by following different steps, but we should use the algorithm that takes the least effort, time, and space.
To identify the efficiency of an algorithm, we do space and time complexity analysis.
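As a concrete illustration of why complexity matters, compare linear search (O(n)) with binary search (O(log n)) on a sorted list. Both solve the same problem, but with very different step counts (the step-counting is added here purely for illustration):

```python
def linear_search(arr, target):
    """O(n): check every element until we find the target."""
    steps = 0
    for i, x in enumerate(arr):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(arr, target):
    """O(log n): halve the sorted search range on each step."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1, 1001))  # sorted list of 1000 numbers
_, linear_steps = linear_search(data, 1000)
_, binary_steps = binary_search(data, 1000)
# linear search takes 1000 steps; binary search takes about 10
```

This is exactly what complexity analysis captures: for n = 1000, O(n) means on the order of a thousand steps, while O(log n) means about ten.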
I would recommend reading more about algorithms here -
I published my news application on the Google Play Console, and it receives daily news updates.
I post the news in my application myself and show it to users daily; I am the owner of my news organization.
My news organization details below
App Name - Dream Team Prediction
App Package in.technicalkeeda.dtp
Organization name - Technical keeda
Address - Nainital, Uttarakhand 263135
contact details-
email - mahavirsingh7399@gmail.com,
phone number-9354446958
website-https://www.technicalkeeda.in
news Website link -> https://protected-bayou-79974.herokuapp.com/getnews
News sources -> https://protected-bayou-79974.herokuapp.com, my own API, Google News
Last updated: March 17, 2022
This Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your information when You use the Service and tells You about Your privacy rights and how the law protects You.
We use Your Personal data to provide and improve the Service. By using the Service, You agree to the collection and use of information in accordance with this Privacy Policy. This Privacy Policy has been created with the help of the Privacy Policy Template.
The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural.
For the purposes of this Privacy Policy:
Account means a unique account created for You to access our Service or parts of our Service.
Affiliate means an entity that controls, is controlled by or is under common control with a party, where "control" means ownership of 50% or more of the shares, equity interest or other securities entitled to vote for election of directors or other managing authority.
Application means the software program provided by the Company downloaded by You on any electronic device, named Dream Team Prediction.
Company (referred to as either "the Company", "We", "Us" or "Our" in this Agreement) refers to Dream Team Prediction.
Country refers to: Uttarakhand, India
Device means any device that can access the Service such as a computer, a cellphone or a digital tablet.
Personal Data is any information that relates to an identified or identifiable individual.
Service refers to the Application.
Service Provider means any natural or legal person who processes the data on behalf of the Company. It refers to third-party companies or individuals employed by the Company to facilitate the Service, to provide the Service on behalf of the Company, to perform services related to the Service or to assist the Company in analyzing how the Service is used.
Usage Data refers to data collected automatically, either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit).
You means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable.
While using Our Service, We may ask You to provide Us with certain personally identifiable information that can be used to contact or identify You. Personally identifiable information may include, but is not limited to:
Usage Data is collected automatically when using the Service.
Usage Data may include information such as Your Device's Internet Protocol address (e.g. IP address), browser type, browser version, the pages of our Service that You visit, the time and date of Your visit, the time spent on those pages, unique device identifiers and other diagnostic data.
When You access the Service by or through a mobile device, We may collect certain information automatically, including, but not limited to, the type of mobile device You use, Your mobile device unique ID, the IP address of Your mobile device, Your mobile operating system, the type of mobile Internet browser You use, unique device identifiers and other diagnostic data.
We may also collect information that Your browser sends whenever You visit our Service or when You access the Service by or through a mobile device.
The Company may use Personal Data for the following purposes:
To provide and maintain our Service, including to monitor the usage of our Service.
To manage Your Account: to manage Your registration as a user of the Service. The Personal Data You provide can give You access to different functionalities of the Service that are available to You as a registered user.
For the performance of a contract: the development, compliance and undertaking of the purchase contract for the products, items or services You have purchased or of any other contract with Us through the Service.
To contact You: To contact You by email, telephone calls, SMS, or other equivalent forms of electronic communication, such as a mobile application's push notifications regarding updates or informative communications related to the functionalities, products or contracted services, including the security updates, when necessary or reasonable for their implementation.
To provide You with news, special offers and general information about other goods, services and events which we offer that are similar to those that you have already purchased or enquired about unless You have opted not to receive such information.
To manage Your requests: To attend and manage Your requests to Us.
For business transfers: We may use Your information to evaluate or conduct a merger, divestiture, restructuring, reorganization, dissolution, or other sale or transfer of some or all of Our assets, whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, in which Personal Data held by Us about our Service users is among the assets transferred.
For other purposes: We may use Your information for other purposes, such as data analysis, identifying usage trends, determining the effectiveness of our promotional campaigns and to evaluate and improve our Service, products, services, marketing and your experience.
We may share Your personal information in the following situations:
The Company will retain Your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use Your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes, and enforce our legal agreements and policies.
The Company will also retain Usage Data for internal analysis purposes. Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of Our Service, or We are legally obligated to retain this data for longer time periods.
Your information, including Personal Data, is processed at the Company's operating offices and in any other places where the parties involved in the processing are located. It means that this information may be transferred to — and maintained on — computers located outside of Your state, province, country or other governmental jurisdiction where the data protection laws may differ from those of Your jurisdiction.
Your consent to this Privacy Policy followed by Your submission of such information represents Your agreement to that transfer.
The Company will take all steps reasonably necessary to ensure that Your data is treated securely and in accordance with this Privacy Policy and no transfer of Your Personal Data will take place to an organization or a country unless there are adequate controls in place including the security of Your data and other personal information.
If the Company is involved in a merger, acquisition or asset sale, Your Personal Data may be transferred. We will provide notice before Your Personal Data is transferred and becomes subject to a different Privacy Policy.
Under certain circumstances, the Company may be required to disclose Your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. a court or a government agency).
The Company may disclose Your Personal Data in the good faith belief that such action is necessary to:
The security of Your Personal Data is important to Us, but remember that no method of transmission over the Internet, or method of electronic storage is 100% secure. While We strive to use commercially acceptable means to protect Your Personal Data, We cannot guarantee its absolute security.
Our Service does not address anyone under the age of 13. We do not knowingly collect personally identifiable information from anyone under the age of 13. If You are a parent or guardian and You are aware that Your child has provided Us with Personal Data, please contact Us. If We become aware that We have collected Personal Data from anyone under the age of 13 without verification of parental consent, We take steps to remove that information from Our servers.
If We need to rely on consent as a legal basis for processing Your information and Your country requires consent from a parent, We may require Your parent's consent before We collect and use that information.
Our Service may contain links to other websites that are not operated by Us. If You click on a third party link, You will be directed to that third party's site. We strongly advise You to review the Privacy Policy of every site You visit.
We have no control over and assume no responsibility for the content, privacy policies or practices of any third party sites or services.
We may update Our Privacy Policy from time to time. We will notify You of any changes by posting the new Privacy Policy on this page.
We will let You know via email and/or a prominent notice on Our Service, prior to the change becoming effective and update the "Last updated" date at the top of this Privacy Policy.
You are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page.
If you have any questions about this Privacy Policy, You can contact us:
By email: mahavirsingh7399@gmail.com
By visiting this page on our website: https://www.technicalkeeda.in
There are two types of software: application software and system software.
Application software performs specific tasks for the user.
System software operates and controls the computer system and provides a platform to run application software.
An operating system is a piece of software that manages all the resources of a computer system, both hardware and software. It provides an environment in which the user can execute his/her programs in a convenient and efficient manner, by hiding the underlying complexity of the hardware and acting as a resource manager.
Problems without an operating system (applications talking to hardware directly):
a. Bulky and complex apps (hardware-interaction code must be in each app's code base).
b. Resource exploitation by one app.
c. No memory protection.
What an operating system is:
a. A collection of system software.
- Interface between the user and the computer hardware.
- Resource management, aka arbitration (memory, device, file, security, process, etc.).
- Hides the underlying complexity of the hardware (aka abstraction).
- Facilitates execution of application programs by providing isolation and protection.
The layered view of a computer system, from top to bottom:
User
Application programs
Operating system
Computer hardware
The operating system provides the means for proper use of the resources in the operation of the computer system.
Credit: Love Babbar
by noreply@blogger.com (Unknown) at February 13, 2022 03:39 AM
Method | Description |
---|---|
getFullYear() | Get the year as a four digit number (yyyy) |
getMonth() | Get the month as a number (0-11) |
getDate() | Get the day as a number (1-31) |
getHours() | Get the hour (0-23) |
getMinutes() | Get the minute (0-59) |
getSeconds() | Get the second (0-59) |
getMilliseconds() | Get the millisecond (0-999) |
getTime() | Get the time (milliseconds since January 1, 1970) |
getDay() | Get the weekday as a number (0-6) |
Date.now() | Get the current time in milliseconds (static method, ECMAScript 5) |
by noreply@blogger.com (Unknown) at February 10, 2022 02:28 PM
In this blog, we will implement a stack using an array.
A stack is a conceptual structure consisting of a set of homogeneous elements and is based on the principle of last in first out (LIFO). It is a commonly used abstract data type with two major operations, namely push and pop. Push and pop are carried out on the topmost element, which is the item most recently added to the stack. The push operation adds an element to the stack while the pop operation removes an element from the top position. The stack concept is used in programming and memory organization in computers.
To implement stack using array refer code below-
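The snippet from the original post isn't reproduced here, so as an illustrative sketch, here is a fixed-capacity array-based stack in Python (the class and method names are my own):

```python
class Stack:
    """Fixed-capacity stack backed by an array (Python list), LIFO order."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.items = [None] * capacity  # pre-allocated backing array
        self.top = -1                   # index of the topmost element

    def is_empty(self):
        return self.top == -1

    def push(self, value):
        """Add an element on top of the stack."""
        if self.top + 1 == self.capacity:
            raise OverflowError("stack overflow")
        self.top += 1
        self.items[self.top] = value

    def pop(self):
        """Remove and return the topmost (most recently pushed) element."""
        if self.is_empty():
            raise IndexError("stack underflow")
        value = self.items[self.top]
        self.top -= 1
        return value

    def peek(self):
        """Return the topmost element without removing it."""
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.items[self.top]

s = Stack()
s.push(10)
s.push(20)
s.push(30)
# pop returns elements in reverse (LIFO) order: 30, then 20, then 10
```

Both push and pop work only on the top index, which is what makes every stack operation O(1).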
code source - geeksforgeeks
by noreply@blogger.com (Unknown) at February 08, 2022 11:19 AM
by noreply@blogger.com (Unknown) at January 21, 2022 03:49 PM
There are multiple ways to find the prime numbers up to N, but the Sieve of Eratosthenes is the most efficient, and it is often used in competitive programming contests.
Step 1 - Create a bool vector of size n and initialize it with false values.
Step 2 - Create an empty vector to store the prime numbers.
Step 3 - Loop through the visited vector from 2 to n.
Step 4 - If the current value is unvisited, push it into the answer vector and mark all of its multiples up to n as visited.
The image below shows how it works.
Code for most efficient solution Sieve of Eratosthenes in c++
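The C++ snippet from the original post isn't included here; as an illustration, the same algorithm in Python, following the steps above:

```python
def sieve_of_eratosthenes(n):
    """Return all primes up to and including n."""
    visited = [False] * (n + 1)  # step 1: bool vector, False = unvisited
    primes = []                  # step 2: answer vector
    for i in range(2, n + 1):    # step 3: loop from 2 to n
        if not visited[i]:       # step 4: unvisited -> i is prime
            primes.append(i)
            for multiple in range(i * i, n + 1, i):
                visited[multiple] = True  # mark all multiples of i
    return primes

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Starting the inner loop at i * i is a standard optimization: smaller multiples of i were already marked by smaller primes.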
by noreply@blogger.com (Unknown) at December 19, 2021 04:13 PM
Low Powered Models for Disease Detection and Classification for Radiology Images
Project Description -
The aim of this project is to create Deep Learning models for detection and classification of radiology images. The models must be compressed such that they can be deployed to low powered devices like ARM devices, Android devices, etc. Compression techniques such as Quantization and Pruning can be used.
Mentors -
Priyanshu Sinha
Saptarshi Purkayastha
Judy Gichoya
Geeta Priya Padmanabhan
Tech Stack -
Numpy
Pandas
PyDicom
Tensorflow
Tensorflow-Lite/ Tensorflow-Model-Optimization
Docker
Qemu
Project Link - Click here
Commits - Click here
Merge Requests - Click here
Why to do this -
There has been a lot of progress in developing Machine Learning models that predict the medical condition of a patient based upon specific inputs relevant to the diagnosis of that condition. However, these models have drawbacks while deployment in real-time on edge devices. Firstly, they have been trained on high-end GPUs that consume a lot of power and have a lot of computational capacity. Edge devices function on limited power and have a considerably low computational limit. Next, these models are extremely large in size, usually a few hundred megabytes. While training, a large amount of space is available. But the same is not reflected on edge devices having low storage capacity. Healthcare professionals do not have high-end machines available for immediate usage of these models. But edge devices, being low-cost, are easily available. To tackle the problem of model deployment, we use model compression techniques that reduce four factors - power consumption, storage usage, computational cost and latency of detection models in the healthcare category.
What have you done -
For the purpose of this project, 2 datasets were used -
RSNA Pneumonia Detection Dataset
Chest-XRay 14 Dataset
The compression techniques used were -
Dynamic Quantization
Float16 Quantization
Int8 Quantization
Model Pruning
Model Pruning + Dynamic Quantization
Model Pruning + Float16 Quantization
Model Pruning + Int8 Quantization
RSNA Pneumonia Detection -
Two models were trained on this dataset - DenseNet201 and InceptionV3. We achieved the following results in the models’ performance with respect to accuracy and size.
Accuracies comparing original and compressed models -
Size comparing original and compressed models -
Accuracies comparing pruned and quantized-pruned models -
Size comparing pruned and quantized-pruned models -
Chest XRay14 -
Pretrained CheXNet model was used for this dataset from Bruce Chou's Github repository (link in references). The following results were obtained for this dataset.
AUROC Score Comparison between original and compressed models -
AUROC Score Comparison between Pruned and Quantized Pruned models -
Model Size Comparison -
How have you done it -
The general pipeline goes like this -
Step 1 - Data Exploration and Cleaning
In this step, we take raw data and explore it. We find out the number of classes, the number of data items per class, and the general distribution of data points. After deriving these insights, we clean the raw data to get rid of any unnecessary features or data entries. We also restructure tabular data so that it can be fed to the models. This involves steps like creating one-hot encodings of labels, creating extra columns, and modifying path variables to point to the images. In the case of images, activities such as augmentation, resizing, shearing, etc. are performed.
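As a small illustration of the one-hot encoding step mentioned above (the class names here are made up for the example):

```python
def one_hot(label, classes):
    """Encode a class label as a one-hot vector over a fixed class list."""
    vec = [0] * len(classes)
    vec[classes.index(label)] = 1
    return vec

classes = ["normal", "pneumonia"]  # illustrative class names
print(one_hot("pneumonia", classes))  # [0, 1]
print(one_hot("normal", classes))     # [1, 0]
```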
Step 2 - Modelling
This is the next step in which we initialize data generators that generate preprocessed images and labels in fixed batches. Data is split into train-val-test subsets. Model architectures are initialized. We have used 3 architectures for this project - DenseNet201, InceptionV3 and CheXNet. We also initialize callbacks, checkpoints and optimizers that will be used during training.
Step 3 - Training and Model Evaluation
Here, we train the models till we achieve acceptable performance. The model should neither be underfit nor overfit. After training is over, we evaluate the models. We evaluated DenseNet and InceptionV3 trained on the RSNA Pnemonia Detection Dataset based on accuracy. This is because the models directly output the class of the input image. CheXNet trained on Chest-XRay14 dataset was evaluated based on AUROC score because the output was not a fixed class but a class probability score. We also record the size of this original model.
Step 4 - Model Pruning
In model pruning, we trim the unnecessary connections in the neural network. Here, I have used Polynomial Decay as the sparsity function. Pruning starts from 50% and goes up to 80% of the total weights in the model. After this, we remove the excess connections and compress the layers of the neural network. This model gets saved in the .h5 format.
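The project uses TensorFlow's pruning API for this; the core idea, zeroing out the smallest-magnitude weights until a target sparsity is reached, can be sketched in plain Python (this is a conceptual illustration, not the TensorFlow implementation):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights until the given fraction is zero."""
    n_prune = int(len(weights) * sparsity)
    # indices of the n_prune weights with the smallest absolute value
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.1]
print(prune_by_magnitude(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

At 50% sparsity, the three smallest-magnitude weights become zero; the zeroed connections can then be compressed away, shrinking the saved model.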
Step 5 - Post-training Quantization
After the models are trained, we quantize them. This is done using Tensorflow Lite Converters. There are 3 types of quantizations that we are performing in this project - Dynamic, Float16 and Int8 quantization. We initialize the converter as per our requirement and pass the pre-trained or pruned model to it. The output is a quantized model in the form of a TFLite FlatBuffer. We evaluate the quantized models based on accuracy/AUROC score (as per the original model) and size.
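The TFLite converter performs the quantization internally; the underlying int8 affine mapping, real_value ≈ scale * (quantized_value - zero_point), can be sketched as follows (a conceptual illustration assuming the values span a nonzero range, not the TFLite code itself):

```python
def quantize_int8(values):
    """Affine-quantize floats to int8: real ≈ scale * (q - zero_point)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0  # int8 covers 256 levels; assumes hi > lo
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate floats."""
    return [scale * (v - zero_point) for v in q]

vals = [0.0, 0.5, 1.0, 2.55]
q, scale, zp = quantize_int8(vals)
# dequantizing recovers the originals to within one quantization step (scale)
```

This is why int8 quantization cuts model size roughly 4x versus float32 while keeping accuracy close: each weight loses at most about one quantization step of precision.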
Step 6 - Inference
For performing inference using normal .h5 models, we use the model.predict() function. While using TFLite models, we initialize interpreters that set the input and output tensors. We invoke the interpreter on an input image and retrieve the output tensor returned by the interpreter. The inference script was run for all models - original, pruned, quantized and hybrid.
Future Scope -
1. Testing these models on actual hardware such as Raspberry Pi and Android phones.
2. Compressing object detection/segmentation/UV based models.
3. Creating a UI to serve these models on the frontend.
References -
1. https://www.tensorflow.org/lite/performance/model_optimization
2. https://youtu.be/3JWRVx1OKQQ
3. https://youtu.be/4iq-d2AmfRU
4. CheXNet - https://github.com/brucechou1983/CheXNet-Keras
by Aishwarya Harpale (noreply@blogger.com) at December 05, 2021 11:52 PM
In various low/middle-income countries, children die within just a few days of birth due to an inadequate supply of the facilities they require. The important fact is that all of these deaths are preventable: they can be prevented by giving proper knowledge to the babies' mothers and other caregivers, and by providing a proper tracking facility to monitor the health status of the baby in the early days after birth.
As a solution, the mHBS applications were launched, a set of 4 applications, of which I worked on mhbs-trainer and mhbs-tracker. The tracker is used for data collection and is built from the DHIS2 mobile application; the trainer is used to access learning resources.
The goal of this project is to develop the scale-up version of the existing mHBS application: updating the old code base, adding new features, and providing a way to access media resources uploaded to DHIS2 through the trainer app, which will be used for the training of individuals.
Objectives Of The project
- Add a seconds data element for OSCE B.
GSoC was one of the best experiences that I ever had. There was an immense amount of knowledge to be gained, and it taught me how great minds from different parts of the world work to bring a product alive.
Thanks to my mentors who always came to my rescue and guided me throughout the journey. My coding standards have greatly improved and my experience with Android has been great.
The part I enjoyed most was the new technologies with which I got exposure through this program.
I would love to contribute to mHBS Applications in the future and guide the new enthusiast and pass on the knowledge I gained to them.
by noreply@blogger.com (Bhavesh Sharma) at August 29, 2021 06:08 PM
by noreply@blogger.com (Unknown) at August 28, 2021 04:31 AM
Week 10 of the coding period is completed. I worked on creating Sphinx documentation, a splash screen, app icons, updating program rules, and an enter-server text field:
- Created documentation using Sphinx
- Deployed the documentation on GitLab Pages
- Created the app icon
- Created the splash screen
- Updated metadata
- Created a CI script to build the documentation
- Added a link to the docs in README.md
https://darshpreet2000.gitlab.io/lh-mhbs-eceb/
Splash Screen | App Icon |
---|---|
Hey Everyone, I am back with my weekly updates about my GSoC journey 😀. This amazing journey is about to reach its end, but I will never forget this great experience, where I learned a lot of technologies and experienced a new tech side of Android development in just a few weeks.
This week I had a meeting with my mentor where I discussed some UI/UX ideas that can make the user experience much better. I improved the UI of the app, uploaded educational resources to DHIS2, and found some important bugs that need to be resolved in the scale-up version. Let's discuss them one by one -
To Classify Documents as :
PDF — pdfMimeTypes = ["application/pdf"]
Videos — videoMimeTypes = ["video/ogg","video/webm","video/.webm","video/.ogv","video/mp4", "video/.mp4","video/.m4v","video/x-flv",
"application/x-mpegURL","video/MP2T","video/3gpp", "video/quicktime","video/x-msvideo","video/x-ms-wmv"]
Everything was working fine, but a few days back I updated my phone and, as usual, kept changing the code to come up with a better solution. Then, suddenly, my app started crashing on my device and would not even open. I was completely clueless about why it was happening, where the problem was, and what I had changed 🤔.
After a long time, I identified the culprit: cordova-plugin-secure-storage, which is used in our trainer app to store the user credentials. After a Google search, I found the cause. The fix was to migrate from cordova-plugin-secure-storage to cordova-plugin-secure-storage-echo.
We need to upload the real educational resources in the dhis2.
Resources: https://globalhealthmedia.org/videos/
It has a lot of media files. This week I uploaded 60+ media files to DHIS2, and all of these media files can be accessed through the trainer app.
There was an idea that it would be better, if possible, to open every media file within our app. Currently we use the cordova-file-opener2 plugin for this purpose, which gives supporting apps access to open the file. But the interesting fact to note is that no other app has access to browse or play the media files; they are not even visible if you look for them on the phone or in any supporting app. All the media files are downloaded and saved in the user's internal storage in a persistent and well-encrypted manner, so any media file can be opened only from the trainer app.
But I checked for the other options -
by noreply@blogger.com (Bhavesh Sharma) at August 16, 2021 04:40 PM
WEEK 3:
Hello everyone. As discussed in the previous blog, this week was to be utilised for creating the appointment workflow in the React application.
Surprisingly, the task hardly took 24 hours to complete, so by the end of Tuesday I was done with this week's work 😄.
There were still a few bugs in the test files to be fixed, so I spent the rest of the week fixing some web-component tests.
Here are the screenshots of the workflow:
Appointment workflows
see you next week 😄
WEEK 2:
Hello folks! Another week done and another batch of code written and merged! In the coming sections I will explain the details of the work done this week.
In the previous week, I had implemented the checkin workflow in the react application. According to my plan in the proposal, this week was to be used to create components for the resources that will be used in the Appointment workflow.
the resources are:
At the beginning of the week, I started looking into lit, the library upon which the components are based. I read the documentation and decided how to create the components so that they can be utilised for different CRUD operations.
I also noticed a few bugs in the components while implementing the checkin workflow, so I created an MR and fixed them.
By Wednesday I started creating the components. Most of these components have a reference or codeable-concept property datatype, so creating components that can be used in different contexts to implement these datatypes will help a lot.
By the end of Wednesday I had created these datatype components:
Once these components were created, the resource components were not difficult to develop. By Thursday I had created Slot-Schedule, and by Friday, Appointment.
During this period my mentor noticed a few bugs in the project:
So along with the development of the components, I made a couple of commits for these issues as well.
As a component will be used for CREATE / GET / PUT request , I made it compatible with everything.
For the create part, inside the constructor an initial empty skeleton value was provided. This value property would be overwritten if the component contains the value attribute. Hence this supports both the cases of create and get operations as a user can avoid the value attribute if the intention is to use the component for creating entries and provide a value after fetching a specific endpoint if the intent is to show entries.
Moreover even if a value property is provided and some of the keys are missing, the component will still remain the same as I have done a check for individual property existence and default value allocation as well, this feature will be helpful for the edit operation.
A default value property for this purpose is set inside the Appointment component constructor; a similar structure is used in the other 2 components.
here are the screenshots of the implemented components:
thanks for reading till the end ❤️ , see you next week 😄
WEEK 1:
Finally the community bonding period is over and the coding period has commenced. In this blog I have loads of content and code to cover!
ANALYSIS OF COMMUNITY BONDING PERIOD
In the community bonding period there were mainly 2 objectives we had set :
Most of my community bonding period was spent on the CI/CD part. The major cause of this was cross-browser, cross-platform testing integration with React. The tool planned to be used was Sauce Labs, and unfortunately for me 😕 there was no proper content regarding Sauce Labs integration with React to be found anywhere. I tried to make things work on my own and nearly succeeded testing with Jest, Snowpack, and React, but Snowpack could not resolve the dependencies inside the lh-toolkit-webcomponent monorepo when it was caching the files.
Snowpack serves the application unbundled during development. Each file needs to be built only once and then is cached forever. When a file changes, Snowpack rebuilds that single file.
As we will mainly be covering unit and integration testing, Sauce Labs will not be a necessity, as suggested by the Sauce Labs community. So I went on to set up the react app.
The design part was smooth sailing and offered many insights that I might not have noticed until later. Upon the suggestion of my mentor I have also documented the API endpoints. I will drop the file with the designs and the endpoints here: https://geforce6t.github.io/blog/categories/EHR/
PROGRESS MADE IN THE WEEK
The plan for the first coding week was to do the following:
Monday: I started coding 🎉️ and completed the first part of the plan.
Tuesday: I started working on the project layout and accomplished the following:
Wednesday: I looked for ways to style the lit components. Lit components use Shadow DOM, and the style encapsulation makes it a little tricky to style custom components from the outside. I made some hacks to do this using callback refs inside the react component where the lit component resides, but this approach would not be optimal.
Probably some minimal styles applied inside the lit components would do great; in any case, I was able to style the material components using the global material CSS variables.
One of the things that I still need to look into is adding some style to the lit components as a whole (margin, border, shadow and the like). This will not be a big issue, as I have a lot of other components to create, so the styling will mainly be global and can be done anytime.
So here is how the create-patient component has changed visually:
compared to above, this is what we have as default:
Thursday: created the general search component.
The general search component is something that I would like to talk about, since it was actually a little difficult to implement. For different cases the search params, columns, and query parameter may differ, so to create a general-purpose search component all of these features must be passed as attributes, which is what the implementation does.
The functions used in the search component are:
The most difficult part of the implementation was filtering results to form column entries. The values we want can be nested one or many levels deep, so it is very important to provide a query string to drive this filtering. It is not possible to pass this query string via props, as it is used inside a map method, so I created a function with a switch case for the different workflows that we will implement.
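The nested-value extraction can be sketched with a small path resolver (the dot/index path syntax here is illustrative, not the project's actual query-string format):

```javascript
// Walk a dot-separated path (array indices included) into a nested object,
// returning undefined instead of throwing when a segment is missing.
function getByPath(obj, path) {
  return path.split('.').reduce(
    (cur, key) => (cur == null ? undefined : cur[key]),
    obj
  );
}

// For a FHIR-style entry, getByPath(entry, 'name.0.given.0')
// reads the first given name.
```

A switch over workflow names, as described above, would then map each workflow to the set of paths used to build its columns.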
Finally, the component looks like this:
Friday: wrote the edit and show patient code to complete the layout of the check-in workflow 🎉️. But one issue came up while creating the PUT request: for some of the fhir components the values do not change even after their value is visually changed. This might be an issue in the react app or in the wc repo; I will take a look and solve it in the coming days!
UPCOMING DAYS
Fix the value-not-changing issue while making a PUT request.
thanks for reading till the end ❤️ , see you next week 😄
Hey Readers, we are in the 10th week of the coding period.
The 10th week started in August. This week I wrote a test file for the web app.
In the 10th week of the coding period, I have done the following things with the cost of care web application.
1. Wrote a unit test file for the web app.
2. Sent 1 MR.
by noreply@blogger.com (Unknown) at August 14, 2021 01:35 AM
Hi, we are almost done with the implementation of the proposal. The additional work is to make improvements and fixes. According to the feedback of the UI/UX designer from the team, the following improvements should be made to the webcomponents:
In the coming week these pointers will be implemented along with documentation and other fixes.
see you next week 😄.
The Video Server OpenCV pipeline for both segmentation and object detection was completed. The Mock HMD VR designed using Unity XR refers to a remote location on the server for getting the video output.
Apart from the server design, documentation was completed for the WebXR approach; for the UnityXR-based method a few sections of documentation are still pending. For loading the model for inference in the OpenCV pipeline, the frozen-graph approach is used.
The video server generates close to 5200 images from a video of length 3 minutes and 29 seconds. The number of images generated depends on the frame rate used, which has to be adjusted optimally for the system configuration.
Similarly, the video output of the program depends on the frame rate selected: the frame rate changes the video length.
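As a quick sanity check of those numbers (assuming the ~25 FPS read rate used elsewhere in the pipeline, which is an assumption here):

```javascript
// Frames extracted = duration × read frame rate.
function frameCount(durationSeconds, fps) {
  return Math.round(durationSeconds * fps);
}

// Output length = frames ÷ write frame rate, which is why the chosen
// frame rate changes the output video length.
function outputSeconds(frames, writeFps) {
  return frames / writeFps;
}

// A 3 min 29 s (209 s) clip at 25 FPS gives 5225 frames,
// close to the 5200 quoted above.
```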
Sample Model Inference :
That's all for today!!!
Hope you had a great week
Tot ziens
Hey Everyone, this week I have fixed the tracker app pipeline that failed last week, exported the tracker and trainer specific metadata, tested them by importing them to the play servers and a lot more. Let’s discuss this in detail.
Last week I submitted an MR that passed the pipeline and was merged by my mentor into the main repo of the tracker app, but then the pipeline failed unexpectedly.
The interesting fact to note is that the pipeline ran successfully on the commit I submitted, while the merge commit my mentor made, which contains no new changes beyond my commits, failed the pipeline. I could not find a reason for it. 😌
But in the end, I managed to find the cause and fix it 😀. I think it was due to the NDK version; if you are building locally, please try building after installing NDK version 21.0.6113669.
Related MR: MR-15
I exported the tracker-specific metadata so that anyone can import it into their own dhis2 instance and benefit from using the mHBS tracker app.
I exported 2 sets of mHBS Tracker App-specific metadata:
To make both sets of metadata importable, I deleted the old references (the “user”, “lastUpdatedBy” and “organisationUnit” keys) from the corresponding JSON. There are more than 4k such instances in the HBB Program metadata, as it was heavily used by the bmgfdev instance users.
Now the metadata can be imported into the new dhis2 instance easily.
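The reference-stripping step can be sketched as a recursive walk over the metadata JSON (the key list comes from the post; the recursion shape is illustrative, not the script actually used):

```javascript
// Keys that hold instance-specific references and block a clean import.
const STALE_KEYS = ['user', 'lastUpdatedBy', 'organisationUnit'];

// Return a copy of the metadata with every stale key removed, at any depth,
// so thousands of occurrences are handled in one pass.
function stripStaleRefs(node) {
  if (Array.isArray(node)) return node.map(stripStaleRefs);
  if (node && typeof node === 'object') {
    const out = {};
    for (const [key, value] of Object.entries(node)) {
      if (!STALE_KEYS.includes(key)) out[key] = stripStaleRefs(value);
    }
    return out;
  }
  return node;
}
```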
Corresponding Issue : Issue #22
Corresponding MR : MR-16
Discussion Thread : Link
I tested the import of the metadata on the DHIS2 play servers with all data elements and attributes. The 2 issues related to the HBB Survey export and import that my mentor suggested I test have also been tested and can be closed as FIXED.
by noreply@blogger.com (Bhavesh Sharma) at August 07, 2021 08:18 PM
Hey Readers, we are in the 9th week of the coding period.
The 9th week started on 30 July. This week I completed the Hospital Rating feature.
In the 9th week of the coding period, I have done the following things with the cost of care web application.
1. Completed the Hospital Rating feature.
2. Designed the hospital rating UI.
3. Displayed comparison data in a table.
4. Sent 1 MR.
by noreply@blogger.com (Unknown) at August 06, 2021 11:42 AM
Week 9 of the coding period is completed. I worked on adding a notification count, an About app screen, a share-app feature, and load-more-notifications functionality.
I added a notification count to the bottom app bar; it displays the count of new notifications in the app. After fetching data from dhis2, the app checks whether it already has the message id, and if not, the count is incremented.
When the user visits the profile page the count becomes 0, as the messages have now been read by the user.
It displays information about the project with a button to visit it. This screen will be useful for promoting the LibreHealth organization to users of this application.
When the user clicks the load-more button, the app fetches the next 5 notifications: it calls the API with the next page number and saves the data to Hive storage.
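The count logic described above can be sketched like this (the storage shape is hypothetical; the app keeps seen ids in Hive, here modelled as a Set):

```javascript
// Increment the unread count only for message ids not seen before;
// remember every id so re-fetched messages are not counted twice.
function updateCount(seenIds, fetchedMessages, count) {
  for (const msg of fetchedMessages) {
    if (!seenIds.has(msg.id)) {
      seenIds.add(msg.id);
      count += 1;
    }
  }
  return count;
}

// Visiting the profile page would simply reset the counter to 0,
// since the messages are considered read.
```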
Hi, As discussed in the last week’s blog, Week 8 was to be used for the following tasks:
The above mentioned points were covered this week.
visit-provider workflow: this workflow was very similar to the visit-nurse workflow implemented last week.
Out of these the first 2 points were already implemented in the visit-nurse workflow.
The rendering of different screens is done using redux, each click when required changes a redux state that renders a different screen.
The workflow's reducer handles the screen changes: the editActivePage action switches the active screen, and since the observation results screen requires the patient Id, the Id is stored once the provider selects the patient.
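A reducer along these lines might look like this (action and field names are illustrative, not necessarily the project's actual ones):

```javascript
// One dedicated reducer per workflow: it owns only this workflow's state,
// so other workflows are never touched.
const initialState = { activePage: 'patient-search', patientId: null };

function visitProviderReducer(state = initialState, action) {
  switch (action.type) {
    case 'EDIT_ACTIVE_PAGE':
      // Switch which screen the workflow renders.
      return { ...state, activePage: action.payload };
    case 'SET_PATIENT_ID':
      // Remember the selected patient for the observation results screen.
      return { ...state, patientId: action.payload };
    default:
      return state;
  }
}
```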
All of these screens are part of a custom higher-order Navbar component that renders different tabs based on state values.
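A sketch of such a tab-driven navbar, reduced to its core idea (tab names and shapes are illustrative; the real component is a React component):

```javascript
// Build a navbar from a list of tab definitions; rendering picks the tab
// indicated by the active index from state, falling back to the first tab.
function makeNavbar(tabs) {
  return function render(activeIndex) {
    const tab = tabs[activeIndex] ?? tabs[0];
    return { title: tab.title, screen: tab.screen() };
  };
}

const navbar = makeNavbar([
  { title: 'Check-in', screen: () => 'CheckInScreen' },
  { title: 'Vitals', screen: () => 'VitalsScreen' },
]);
```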
The e-prescription workflow is almost identical to the visit-provider workflow, apart from one or two screens.
see you next week 😄.
The prototype for immersive mode was designed in Unity using the inbuilt XR plugins. For HMD emulation, MockHMD was used. The material was rendered on a plane and linked to a 2D texture; the VideoPlayer asset in Unity helped in converting the frames to textures. The camera was bound to the camera space. There were no performance issues during development and testing. The asset used was present locally, but an asset fetched via URL can also be used.
The segmentation model from the previous week crashed despite my efforts, so I created a UNet from scratch. The basic model performs well on the sample space; hyper-parameter optimisation is pending. After 300 epochs a Dice coefficient of 0.84 was achieved. This UNet model works with OpenCV: for the OpenCV segmentation pipeline, only the model inference and drawing portions have to change. Inference takes close to 1-2 seconds for a 512x512x3 image.
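For reference, the Dice coefficient quoted above is computed from the overlap between the predicted and ground-truth masks. A minimal sketch over flat binary masks:

```javascript
// Dice coefficient = 2·|A ∩ B| / (|A| + |B|) for binary masks a and b,
// given as flat arrays of 0/1 values of equal length.
function dice(a, b) {
  let inter = 0, sumA = 0, sumB = 0;
  for (let i = 0; i < a.length; i++) {
    inter += a[i] & b[i];
    sumA += a[i];
    sumB += b[i];
  }
  return (2 * inter) / (sumA + sumB);
}
```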
That's all for today!!!
Hope you had a great week
Farvel
Hello Everyone, I am elated to share that the project is going well and almost all the issues are being sorted out in GitLab. This week I tested the workflow of the apps, fixed a few bugs, updated the Readme, and added some demonstration videos to help with the use of the mhbs apps. 😀
I have verified that there is functionality in the mhbs Tracker app to mark an existing enrollment as complete, and there is also an option to re-open completed enrollments.
To understand the working please check this.
I have checked the presence and working of the second data element in the OSCE B form, and tested a few other things as well. The data elements I tested that are present in the OSCE form are:
Link of the OSCE B Form:
Link Check_OSCE_B_form_of_already_existing_event
All the features of Module_1_mHBS_General_User_Guide_JANUARY_30_2019_FINAL.pdf have been tested and are present in the new tracker app.
But the new UI is entirely different, and features may not be located where they are shown in this file. I am thinking of making a new user guide for the tracker app and will discuss it with my mentor.
I checked the frequency of the existing HBB Survey and it is as per our need, but the mentors might want me to check the same for the exported metadata. Since the work of exporting and importing metadata is not yet complete, this will be verified once it is.
Similar to the above issue, I checked this functionality with the existing HBB Survey and it is fine. We also need to check it with the exported metadata, and will do so when the export work is completed.
There is a slight modification being made in the regular syncing of media files as per discussion with mentors.
Previous approach: whenever the user clicked on a media file while connected to the internet, we re-downloaded the file and replaced the old locally saved copy with it. This was done to make sure we always had the latest, correct file.
Problem with the previous approach: the media files are rarely updated, yet whenever the user opened one it was re-downloaded in the background, consuming a lot of the user’s data.
New approach: as discussed with my mentor, we decided we will never update an existing media file; if a file needs updating, we remove it and upload a new copy with the same name, so we never need to download a file again and again. You can change the name of an existing file, add new files, or remove old files, but not update the media content of an existing file.
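The resulting download decision is then trivial (a sketch; the local store here is just a Set of file names, whereas the real app tracks this in its database):

```javascript
// Under the new rule a file's content never changes in place, so a file is
// fetched only when no local copy with that name exists yet.
function needsDownload(localNames, remoteName) {
  return !localNames.has(remoteName);
}
```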
As per the above discussion, I made changes to the trainer app to stop it downloading media content repeatedly. I also encountered and fixed a few bugs in the trainer app during testing.
Committed : b8f6403d
Related Issue: Issue #14
I updated the trainer app Readme with all of this logic and added resources like
Demonstration_of_Uploading_resources_on_dhis2
Demonstartion_of_accessing_resources_on_trainer_app_through_tracker_app
Committed : 8483042a
Related MR: Link
2. Fixed Issue #14 — Modify Syncing Logic for Media files.
3. Updated Readme — added documentation demonstrating how to access media resources through the trainer app and upload resources through dhis2.
Committed 8483042a
4. Related MR: Link
5. Small clips to depict the use of the media page.
Demonstration_of_Uploading_resources_on_dhis2
Demonstartion_of_accessing_resources_on_trainer_app_through_tracker_app
Export and import of metadata, and the GitLab CI failure from last week.
I want to discuss these with my mentor and get them fixed.
by noreply@blogger.com (Bhavesh Sharma) at July 31, 2021 01:43 PM
Hey Readers, we are in the 8th week of the coding period.
The 8th week started on 23rd July. This week I was working on compare hospitals with ratings.
In the 8th week of the coding period, I have done the following things with the cost of care web application.
1. Started working on the compare-hospitals screens.
2. Extracted JSON data from the compare-hospitals Excel file.
3. Fixed the pipeline.
4. Completed pending tasks.
Week 8 of the coding period is completed. I worked on adding a screen showing the on-call doctors schedule, and created the stage 5 assessments capture feature.
Tasks
Hi, As discussed in the last week’s blog, Week 7 was to be used for the following tasks:
This week I was a little busy with some academic work, hence not all of the above pointers were possible; the last 2 pointers will be shifted to the upcoming week (Week 8).
This week I have mainly worked on completing the visit nurse workflow. As the vitals signs and medication statement components were merged, the react-ehr application was updated to include the screens which use these components in the visit-nurse workflow.
Another interesting thing is that most of these screens have an almost identical structure, hence a single layout component could be created and shared between the different screens.
The dosage component seems to cause a build failure, which will be fixed in the coming week.
reiterating the pointers from last week’s blog for the implementation points of the visit-provider workflow:
the pointers for the implementation points of the e-prescription workflow:
see you next week 😄.
Tested 3–4 Issues
I have gone through almost all the issues of the tracker and trainer apps. I tested 4 issues marked with the testing label and prepared their solutions. I will discuss my approaches for these issues with my mentor and fix them next week.
Fixed mhbs Logo Issue Issue #4
In the tracker app some of the screens had the dhis-2 logo. It is now replaced with the mhbs logo.
Fixed Issue #4
Related MR : Mhbs logo fixed
Updated Trainer App Readme.md
Since we added the media page, offline support, and the app usage tracker to the trainer app, we wanted some small documentation added to the Readme for further reference. I have done it -
Committed here ( 718f8756 ).
Fixed Bug in GitLab CI
There was a bug in the GitLab CI code of the trainer app: Gradle was building the apk correctly, but the artifact was not being assembled correctly. It was a mistake I had made earlier, which I have now corrected.
Committed f01c6cf9.
by noreply@blogger.com (Bhavesh Sharma) at July 23, 2021 06:22 PM
The video object detection pipeline is designed using OpenCV. The videos are loaded and a frame rate of 25 FPS is assumed. Using the frames, a tf.Session is initiated and the TF Object Detection API model is run. The resulting bounding boxes are drawn using OpenCV.
The resulting images with bounding boxes are written to a video at 1 FPS using the OpenCV VideoWriter. The generated video can then be served using a data or blob URL, depending on the video generated.
The segmentation model is not working properly in most cases. The obvious solution would be to retrain the entire model. The segmentation model has to be implemented in the same way as the object detection, using OpenCV.
At present the entire video generation pipeline takes less than 40 seconds, excluding uploading and downloading time. The server it was tested on has an RTX 3060 GPU; peak GPU memory utilisation reached up to 11 GB.
That's all for today!!!
Hope you had a great week
Annyeong
Hey Readers, we are in the 7th week of the coding period.
The 7th week started on 16th July. This week I was working on web application nearby hospital feature.
In the 7th week of the coding period, I have done the following things with the cost of care web application.
1. Added nearby hospitals to the home screen of the web app using the Overpass API.
2. Changed the home screen UI.
3. Added search-by-address and name filters for hospitals.
4. Sent 1 MR.
Week 7 of the coding period is completed. I worked on adding an on-call doctors viewing feature, and completed notification display for risk assessments and monitoring alerts.
GIF Showing on Call Doctors slider
Test Cases
Hi, According to the proposal timeline, week 6 was to be based on:
Fortunately, there was no pending work, hence the focus was mainly on the remaining points.
The tasks of week 7 are to:
As most of the components required for implementing the visit-nurse workflow are already present in the web-component library, I started working on it.
The visit-nurse workflow consists of many screens and multiple components.
Some of the required components are not merged into the main repo yet, hence the respective tab is unimplemented.
Each workflow in the application is a different page with a unique URL, hence the use of redux is immense in the project.
Each workflow also has a dedicated reducer which handles all the operations related to a specific workflow without editing any other state.
Inside the project’s src directory, the workflow directory contains all the code distributed into separate directories for different workflows. This makes it very simple to create / edit or even delete a workflow without affecting other parts of the application!
The main exported component displays different screens based on the redux state values.
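The idea can be sketched as a plain screen selector (state and screen names here are hypothetical, not the project's actual ones):

```javascript
// Pick which screen the workflow should render from the redux state.
// The main component's render method would switch on this result.
function selectScreen(state) {
  switch (state.activePage) {
    case 'search':  return 'PatientSearchScreen';
    case 'vitals':  return 'VitalsScreen';
    case 'allergy': return 'AllergyScreen';
    default:        return 'HomeScreen';
  }
}
```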
All the implemented screens and the workflow can be seen here:
For the visit-nurse workflow, the medication statement and the vitals tabs are left to be implemented, these must be completed within the next week.
There are also some additions to be made in the webcomponents : some fields like encounter and subject (patient) are required to be added to the allergy and medication statement components.
The visit-provider workflow has similarities to the visit-nurse workflow; some of the screens are the same for both, hence the provider workflow should be completed faster than the nurse workflow.
these are the implementation points for the visit-provider workflow:
see you next week 😄.
Hi! We have a small change in the implementation plan. As per the proposal, this week (week 5) was meant for implementing the visit workflow in the react/EHR application. The implementation required some new components, which were created last week as per the plan, but the merge requests are not merged yet. As there can be multiple changes and modifications required after the MRs are reviewed, it is better to wait until the components are merged, with modifications if required.
Therefore this week was based on doing the tasks of the 7th Week as Week6 is planned for improvements and bug fixing.
Primarily this week was spent creating the last set of components that will be required for the workflows planned.
The following components were created -
These components will be used for e-prescription workflow, although the dosage backbone element can be used for other Medication workflow based resources as well.
The pattern used for creating these components is the same as the one used for the other components.
Here is the screenshot:
see you next week 😄.
The main focus for this week was correcting the errors in the drawn bounding boxes. The bounding boxes differed from the ground truth; that was because of an issue with the canvas context drawing settings, not the actual model.
After correcting the drawing steps in the canvas context, the bounding boxes are drawn fairly close to the ground truth.
The second goal for this week was the development of a remote-server-based object detection video API. I successfully made it work for a single image, but beyond that the server started to hang up.
The video encoding and decoding processes are particularly stressful for the CPU. Throughout development the CPU was at 96-97% utilisation, running at maximum clock speed. This was the case for a single instance; if multiple instances are launched, the server crashes instantly. For running the frozen inference model the GPU was used; under a single-GPU setup the GPU was at 100% memory utilisation.
Apart from this, network latency spikes during the encoding and decoding process.
That's all for today!!!!
Hope you had a great week
Adios
Screenshots: “Classifying as Danger after stage 2 assessments” and “Classifying as Problem after stage 2 assessments”.
Test Cases for stage-2
Test Cases for Classification Repository
To show a summary of 24 hours, we need to pass lastUpdatedDuration=1d (1 day) as a parameter to the API to fetch all the events that happened within the last 24 hours.
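Building such a query can be sketched like this (the base URL and program id are placeholders; lastUpdatedDuration is the parameter mentioned above):

```javascript
// Compose a DHIS2-style events query that fetches everything updated
// within the last 24 hours for a given program.
function eventsUrl(base, program) {
  const params = new URLSearchParams({
    program,                      // hypothetical program id
    lastUpdatedDuration: '1d',    // "1d" = within the last 1 day
  });
  return `${base}/api/events?${params}`;
}
```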
This has 3 types
Hey Everyone, I am back with my weekly blog. Let’s have a look at work done in the 6th week of this amazing journey.
Reimplemented and Completed App Usage Tracking System
Guys, as stated in the last blog I had completed the app usage tracking system, but on discussing it with my mentor we came up with a better approach and reimplemented the completed system. The changes are not huge; we just changed the dhis2 storage part. Let’s have a look at how we are tracking trainer app usage now -
Related MR: Added System to track app usage (!196) · Merge requests
Passed Pipeline: #338127000
Modified mhbs-tracker app to send login credentials
Since we want the user to enter credentials only once for both apps, with the trainer app being launched from the tracker app, the idea is that the tracker app shares the login credentials with the trainer app on each launch. I saved the user’s login credentials in secured Shared Prefs and passed them with the intent that launches the trainer app, so the trainer app can proceed.
Related MR: Passed user credentials to trainer app for login & Fixed Some Warnings · Merge requests
Resolved Issues: #16
Passed Pipeline: #338443846
Removed Fabric usages
The Fabric plugin is deprecated and we need to remove its instances from the app.
Committed here.
Other Works
I am not clear on these issues, so I will try to discuss them over the weekend so that I can start work from Monday.
by noreply@blogger.com (Bhavesh Sharma) at July 16, 2021 05:41 PM
Hey Readers, we are in the 6th week of the coding period.
The 6th week started on 9th July. This week I started working on the Cost of care web application.
In the 6th week of the coding period, I have done the following things with the cost of care web application.
1. Added the Overpass API.
2. Displayed the Overpass API results (hospital name, distance, and number of beds available), same as in the Flutter application.
This week I was busy with college work. That's why I did less work as compared to the previous week.
Find Minimum in Rotated Sorted Array II

Suppose an array of length n sorted in ascending order is rotated between 1 and n times. For example, the array nums = [0,1,4,4,5,6,7] might become:

- [4,5,6,7,0,1,4] if it was rotated 4 times.
- [0,1,4,4,5,6,7] if it was rotated 7 times.

Notice that rotating an array [a[0], a[1], a[2], ..., a[n-1]] 1 time results in the array [a[n-1], a[0], a[1], a[2], ..., a[n-2]].

Given the sorted rotated array nums that may contain duplicates, return the minimum element of this array.

You must decrease the overall operation steps as much as possible.
Example 1:
Input: nums = [1,3,5]
Output: 1
Example 2:
Input: nums = [2,2,2,0,1]
Output: 0
Constraints:

- n == nums.length
- 1 <= n <= 5000
- -5000 <= nums[i] <= 5000
- nums is sorted and rotated between 1 and n times.
Follow up: This problem is similar to Find Minimum in Rotated Sorted Array, but nums may contain duplicates. Would this affect the runtime complexity? How and why?
Question link: https://leetcode.com/problems/find-minimum-in-rotated-sorted-array-ii/
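A common solution is a binary search that shrinks the right bound by one when it hits a duplicate; the duplicates are also what answers the follow-up, since the worst case (all equal elements) degrades to O(n):

```javascript
// Find the minimum of a rotated sorted array that may contain duplicates.
function findMin(nums) {
  let lo = 0, hi = nums.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (nums[mid] > nums[hi]) {
      lo = mid + 1;   // minimum lies strictly right of mid
    } else if (nums[mid] < nums[hi]) {
      hi = mid;       // minimum is at mid or to its left
    } else {
      hi--;           // nums[mid] === nums[hi]: safely drop one duplicate
    }
  }
  return nums[lo];
}
```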
Hello Everyone, sorry for getting delayed in writing this blog. Last week's work is as follows:
To track the app usage I had followed these steps-
Now we can send data for each and every page to dhis2 in the Tracker program.
2. Created Local SQL- DB table to store app usage
3. Idea of Threshold Limit for app usage
All of the above-discussed flows are integrated and set up. But recently we came up with a better approach, suggested by my mentor, so I will try to implement that one and send the MR instead of this approach.
by noreply@blogger.com (Bhavesh Sharma) at July 12, 2021 12:13 PM
The main target for this week was POC development. I was successfully able to develop a POC with the help of HTML5 Canvas and WebXR. The total inference time is about 25 seconds on a Snapdragon 870.
The previous week’s model had an inherent frozen-graph operations issue, because of which it did not perform well when used with a larger dataset. So I had to re-train the model; the newer one performed well.
For bounding box generation, I have an HTML5 canvas on which I draw the rectangles with the help of context2d. The total time for bounding box drawing and model inference is about 25 seconds for a 3-minute clip.
The tested video clip contains images which are part of the segmented images obtained from the Kvasir dataset. For joining the images I used a tool called Clideo.
The POC works well on Android phones without any major performance issues.
That's all for today!!!
Hope you had a great week
Au Revoir
References
Test Cases
Screenshots: By Birth Time, By Status, By Location.
Screenshots: Search Oni, Search test.
Hey Readers, we are in the 5th week of the coding period.
The 5th week started on 2nd July. This week was exciting and I enjoyed this week's task very well.
In the 5th week of the coding period, I have done the following things with the cost of care flutter application.
1. Added an address field to the download CDM screen.
2. Added search by address and hospital name to the download CDM screen.
3. Added text shown when no bookmarked items are left.
4. UI modifications.
I really enjoyed the 5th week of the coding period and I gained a lot of knowledge from it.
Now we are going to enter the first evaluation period (12-16 July).
see you next week.
Advancing into the 4th week, everything is going as per the plan. By the end of this week, most of the components planned are already created.
This week (the 4th week) was utilised for creating components for the visit workflow.
The visit workflow primarily involves the nurse and the provider along with the patient: the nurse checks the arrival of the patient, takes the vitals, and verifies and creates medications and allergies.
Provider then checks the patient and performs exams and orders lab tests if necessary.
FHIR does not have a visit resource so every operation related to a visit event is handled by the modified encounter resource.
In FHIR, Appointment is used for establishing a date for the encounter.
When the patient arrives and the visit is about to start, the appointment is marked as fulfilled and linked to the newly created encounter.
The following components were created for this: (other components which will be required apart from the following mentioned components are already created)
The web component repo already had components for allergy and observation, but their utility was limited; moreover, there was a separate component for every property, which is not required, since with the created datatype components all the properties of any resource can be included in a single component.
A definite pattern is used for creating these components.
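As a rough sketch of that pattern (a stub stands in for the real LitElement base class so the snippet is self-contained, and the property names are illustrative):

```javascript
// Stand-ins so the sketch runs on its own; the real components instead do
// `import { LitElement, html } from 'lit-element';`
class LitElement {}
const html = (strings, ...values) => strings.raw.join('{expr}');

// The recurring pattern: declare a `value` property, seed it with an empty
// skeleton in the constructor (CREATE case), let an attribute overwrite it
// (GET/EDIT case), and render everything from `this.value`.
class TimingSketch extends LitElement {
  static get properties() {
    return { value: { type: Object } };
  }

  constructor() {
    super();
    this.value = { event: [], code: '' }; // empty skeleton for CREATE
  }

  render() {
    return html`<div>${this.value.code}</div>`;
  }
}
```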
Here are the screenshots:
—
—
see you next week 😄.
This week was an eventful one. I started the week with TF Object Detection model development. I trained two models: one was a Faster R-CNN and the other a MobileNet SSD. I plan to train YOLO models too and convert them into ONNX format for JS inference.
For VR, the bounding boxes posed a lot of problems. I used HTML5 Canvas and WebGL for designing and rendering the bounding boxes. The dataset for segmentation has only one class, so class labels are not rendered on top of the bounding box. The model inference is still slow and in some cases fails to render the bounding boxes.
The POC for object detection is not working properly at this juncture. It still requires a few more corrections to improve model performance.
That's all for today!!!
Hope you had a great week
Vale
Hello everyone, do you want to know what is the progress of our project?😏
Obviously, I am here to share that with you. So let’s have a look at the work done in the fourth week of this amazing journey🧐-
This week I set up GitLab CI for our trainer app, completed offline support for the trainer app using an SQLite database, added syncing UI in the app to differentiate downloaded content, and tested the 2-3 issues mentioned in the last blog to see whether they are resolved or not. Wait guys, let’s discuss all this in detail 😀 -
Without CI/CD it is difficult to check whether a particular change will build properly, and we would have to rely on a local clone of the project or some other option to get the latest apk. To solve this, I set up the CI/CD pipeline with the MR sent last week and the one sent this week as well.
Resolved Issues: #112
committed here - ce83405a
Last run Pipeline: #327895681
CI is also added to offline-support MR.
In the last MR there was no support for storing any kind of data locally on the device: documents on the media page and every media file were downloaded and then played, with nothing available offline.
Therefore I added an SQL database to the trainer app to store data. This week I completed the pending work related to this, which I had only just started last week.
Now the updated list of documents is saved to the local database whenever a user visits the media page with an active internet connection. If a user visits this page in offline mode, he is still able to use the application in the same way as in online mode; in that case the data is fetched from the local database.
When a user clicks on a media file that is not yet downloaded, it takes some time to download the first time and a progress UI is shown. If it is already downloaded, it plays instantly without waiting for a newer version, but every click, downloaded or not, hits the API to check for an updated media file. In offline mode no update happens and the last cached version stored on the device is played; in online mode the cached version plays instantly while the API is queried for any updates to the file.
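The click behaviour above is essentially a "cache first, then revalidate" policy. A minimal sketch (function and interface names here are hypothetical, not the app's actual code):

```javascript
// Hypothetical sketch of the media click flow described above.
// `db` and `api` are assumed interfaces: db.getFile(id) resolves to the
// cached file or null; api.downloadFile(id) fetches the latest version.
async function openMedia(id, db, api, isOnline, play) {
  const cached = await db.getFile(id);
  if (cached) play(cached);                 // play last cached version instantly
  if (!isOnline) return cached;             // offline: nothing more to do
  const fresh = await api.downloadFile(id); // online: always hit the API
  await db.saveFile(id, fresh);             // update the cache for next time
  if (!cached) play(fresh);                 // first download: play after progress UI
  return fresh;
}
```

The key point is that the API call happens on every online click, so the cache can never stay stale for more than one viewing.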
SQL Database Schema & Logic
2. Logic for storing the list of documents on the media page-
(1) DELETE : document ids present in our local DB but not in the new list. These documents were deleted from dhis2 and need to be deleted from the local DB as well.
(2) UPDATE : after the first step, the local DB is left only with ids that need an update. We simply update each of those rows with the new corresponding data we received.
(3) CREATE : the document ids in the new list that did not pass through step 2 belong to newly added documents. We just need to create new rows for them with their data in our local DB.
These steps take care of all cases for making the list of documents available offline.
committed here: e8b93759
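The three DELETE / UPDATE / CREATE steps above can be sketched as a pure diff function (the names are illustrative, not the trainer app's actual code):

```javascript
// Sketch of the DELETE / UPDATE / CREATE reconciliation described above.
// localDocs and serverDocs are arrays of { id, ...data } objects.
function diffDocuments(localDocs, serverDocs) {
  const localIds = new Set(localDocs.map(d => d.id));
  const serverIds = new Set(serverDocs.map(d => d.id));
  return {
    // (1) in local DB but no longer on the server -> delete locally
    toDelete: localDocs.filter(d => !serverIds.has(d.id)),
    // (2) present on both sides -> overwrite the local row with server data
    toUpdate: serverDocs.filter(d => localIds.has(d.id)),
    // (3) on the server but not local -> create new rows
    toCreate: serverDocs.filter(d => !localIds.has(d.id)),
  };
}
```

Because the three sets are disjoint, applying them in any order leaves the local DB an exact mirror of the server list.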
3. Logic for offline support of media files-
committed here: 65ddad02
Resolved Issues: #69
Corresponding MR: !195
Pipeline: #329574660
I added 3 types of sync symbols ( Pending, Completed, Error ) to the media page list.
I also added a progress bar that shows the ongoing sync state.
committed here: bf49a629
There are some issues that I had to test last week. Let's discuss them one by one-
2. Role-based restricted access for viewing tracked entities ( #44 )
3. Need to create a new program stage in dhis2 ( #10 )
2. Working on Issue #44
3. Working on Issue #10
by noreply@blogger.com (Bhavesh Sharma) at July 02, 2021 05:51 PM
Hey, readers, we are in the 4th week of the coding period.
The 4th week started on 25th June. Till now I am really enjoying this coding period and I am improving my development skills.
In the 4th week of the coding period, I have done the following things with the cost of care flutter application.
1- Added a search bar to the compare hospital screen.
2- Modified the compare hospital screen UI.
The 4th-week tasks were quite easy. During the 4th week, I sent a merge request to the development branch, as I told you earlier.
This is all about the 4th week of the coding period. I really enjoyed it.
See you next week.
CHECK-IN WORKFLOW :
Valid search parameters for this search are: [_id, _language, _lastUpdated, active, address, address-city, address-country, address-postalcode, address-state, address-use, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]
GET [base]/Patient?identifier=[value]
GET [base]/Patient?name=[value]
POST [base]/Patient
GET [base]/Patient/[_id]
PUT [base]/Patient/[_id]
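In the client, these endpoints reduce to templated URLs. A small helper like the following (a sketch, not the project's actual code) builds the search URL before handing it to fetch; the parameter names must come from the valid list quoted above:

```javascript
// Build a FHIR search URL such as [base]/Patient?identifier=[value].
function fhirSearchUrl(base, resource, params) {
  const query = new URLSearchParams(params).toString();
  return query ? `${base}/${resource}?${query}` : `${base}/${resource}`;
}

// Example usage (the base URL is a placeholder):
// fetch(fhirSearchUrl('https://fhir.example.org', 'Patient', { name: 'Smith' }))
//   .then(res => res.json())
//   .then(bundle => console.log(bundle.entry));
```

The same helper covers the Schedule, Slot, and Appointment searches below by swapping the resource name.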
check-in.mp4 from Shashwat on Vimeo.
APPOINTMENT WORKFLOW :
SCHEDULE TAB
Valid search parameters for this search are: [_id, _language, _lastUpdated, active, actor, date, identifier, service-category, service-type, specialty]
GET [base]/Schedule?actor=Practitioner/[value]
GET [base]/Schedule?actor=Location/[value]
GET [base]/Schedule?actor=Patient/[value]
GET [base]/Schedule?service-category=[value]
POST [base]/Schedule
GET [base]/Schedule/[_id]
PUT [base]/Schedule/[_id]
SLOT TAB
Valid search parameters for this search are: [_id, _language, _lastUpdated, appointment-type, identifier, schedule, service-category, service-type, specialty, start, status]
GET [base]/Slot?status=[value]
GET [base]/Slot?schedule=[value]
POST [base]/Slot
GET [base]/Slot/[_id]
PUT [base]/Slot/[_id]
APPOINTMENT TAB
Valid search parameters for this search are: [_id, _language, _lastUpdated, actor, appointment-type, based-on, date, identifier, location, part-status, patient, practitioner, reason-code, reason-reference, service-category, service-type, slot, specialty, status, supporting-info]
GET [base]/Appointment?actor=[value]
GET [base]/Appointment?status=[value]
POST [base]/Appointment
GET [base]/Appointment/[_id]
PUT [base]/Appointment/[_id]
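Booking ties the three tabs together: pick a Schedule, find a free Slot on it, then POST an Appointment referencing the slot and the patient. A hedged sketch of the request body (field choices follow the FHIR R4 Appointment resource in general, not necessarily this project's exact payload):

```javascript
// Build a minimal FHIR R4 Appointment body for POST [base]/Appointment.
function buildAppointment(slotId, patientId, practitionerId) {
  return {
    resourceType: 'Appointment',
    status: 'booked',
    slot: [{ reference: `Slot/${slotId}` }],
    participant: [
      { actor: { reference: `Patient/${patientId}` }, status: 'accepted' },
      { actor: { reference: `Practitioner/${practitionerId}` }, status: 'accepted' },
    ],
  };
}
```

After a successful POST, the referenced Slot's status would typically be updated (PUT [base]/Slot/[_id]) so it no longer shows as free.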
///////////////////////////////////////////////NURSE///////////////////////////////////////////////
FIND PATIENT:
GET [base]/Patient?identifier=[value]
GET [base]/Patient?name=[value]
Once the nurse selects a specific patient, the patient id will be stored by a redux action. So the patient id below is fetched from the redux store via a useSelector hook to get the appointments and the encounters related to a specific Patient entry.
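Concretely, the pattern described is a plain action that stores the id and a selector read back with useSelector. This is a hypothetical sketch; the real state shape and action names may differ:

```javascript
// Hypothetical slice: stores the patient selected from the search results.
const SELECT_PATIENT = 'SELECT_PATIENT';
const selectPatient = id => ({ type: SELECT_PATIENT, payload: id });

function patientReducer(state = { selectedId: null }, action) {
  return action.type === SELECT_PATIENT
    ? { ...state, selectedId: action.payload }
    : state;
}

// Selector passed to react-redux's useSelector inside a component:
//   const patientId = useSelector(s => s.patient.selectedId);
// then: fetch(`${base}/Appointment?actor=Patient/${patientId}`)
const selectPatientId = state => state.patient.selectedId;
```

The same stored id then drives the Appointment and Encounter requests listed below.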
GET [base]/Appointment?actor=Patient/[_id]
GET [base]/Encounter?subject=Patient/[_id] /* TO GET ALL THE ENCOUNTERS SPECIFIC TO A PATIENT */
POST [base]/Encounter /* TO CREATE AN ENCOUNTER */
PUT [base]/Encounter/[_id]
GET [base]/AllergyIntolerance?patient=Patient/[_id]
POST [base]/AllergyIntolerance /* TO CREATE AN ALLERGY ENTRY FOR THE PATIENT */
The FHIR resource used here is MedicationStatement: it has a reference to the patient and is required to know what medications the patient has taken or is currently taking.
GET [base]/MedicationStatement?subject=Patient/[_id]
POST [base]/MedicationStatement /* TO CREATE A MEDICATION-STATEMENT ENTRY FOR THE PATIENT */
IN FHIR THERE IS NO SEPARATE RESOURCE FOR VITALS; VITALS CAN BE TAKEN AS ENTRIES OF THE OBSERVATION RESOURCE UNDER THE CATEGORY OF VITAL-SIGNS
GET [base]/Observation?category=vital-signs&subject=Patient/[_id]
POST [base]/Observation /* TO CREATE AN OBSERVATION ENTRY FOR THE PATIENT VITALS */
PUT [base]/Observation/[_id] /* TO EDIT OBSERVATION ENTRY FOR THE PATIENT VITALS */
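A vitals entry is therefore just an Observation whose category coding is `vital-signs`. A sketch of the body for the POST above; the category system and LOINC usage are standard FHIR R4 conventions, but treat the exact field choices as illustrative rather than this project's code:

```javascript
// Minimal FHIR R4 Observation for a vital sign (e.g. heart rate).
function buildVitalSign(patientId, loincCode, display, value, unit) {
  return {
    resourceType: 'Observation',
    status: 'final',
    category: [{
      coding: [{
        system: 'http://terminology.hl7.org/CodeSystem/observation-category',
        code: 'vital-signs',
      }],
    }],
    code: { coding: [{ system: 'http://loinc.org', code: loincCode, display }] },
    subject: { reference: `Patient/${patientId}` },
    valueQuantity: { value, unit },
  };
}

// e.g. buildVitalSign('123', '8867-4', 'Heart rate', 72, 'beats/min')
```

The `category` coding is what makes the GET by `category=vital-signs` above return this entry.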
/////////////////////////////////////////PROVIDER/////////////////////////////////////////
FIND PATIENT:
GET [base]/Patient?identifier=[value]
GET [base]/Patient?name=[value]
Once the provider selects a specific patient, the patient id will be stored by a redux action. So the patient id below is fetched from the redux store via a useSelector hook to get the encounters related to a specific Patient entry.
GET [base]/Encounter?subject=Patient/[_id] /* TO GET ALL THE ENCOUNTERS SPECIFIC TO A PATIENT */
PUT [base]/Encounter/[_id]
Similar to the above case, once an encounter is selected its id is stored in the redux store.
GET [base]/Observation?encounter=Encounter/[_id]
POST [base]/Observation
GET [base]/ServiceRequest?encounter=Encounter/[_id]
POST [base]/ServiceRequest
////////////////////////////////////////////PRESCRIBER//////////////////////////////////////////
FIND PATIENT:
GET [base]/Patient?identifier=[value]
GET [base]/Patient?name=[value]
Once the prescriber selects a specific patient, the patient id will be stored by a redux action. So the patient id below is fetched from the redux store via a useSelector hook to get the encounters related to a specific Patient entry.
GET [base]/Encounter?subject=Patient/[_id] /* TO GET ALL THE ENCOUNTERS SPECIFIC TO A PATIENT */
PUT [base]/Encounter/[_id]
Similar to the above case, once an encounter is selected its id is stored in the redux store.
GET [base]/AllergyIntolerance?patient=Patient/[_id]
The FHIR resource used here is MedicationStatement: it has a reference to the patient and is required to know what medications the patient has taken or is currently taking.
GET [base]/MedicationStatement?subject=Patient/[_id]
The FHIR resource used here is MedicationRequest.
POST [base]/MedicationRequest
This was the third week of the coding phase, and it was good. This week I mainly worked on playing the video blobs that I fetched from the server last week and on showing PDF files in the app. I also worked on offline support for the application. Let's see everything in detail-
Added a feature to play all types of media files in the app
We have 2 types of media files, videos and PDFs, which we have to show in our app's media section. Let's see what approach I chose to achieve that, and why.
fileOpener2
plugin, I opened that file using the supported apps already available on the user's device. This seems simple, but it really took a lot of time to figure out how Cordova actually handles all this stuff 🤔 because I am not very familiar with the Cordova platform, but I finally made it work. For videos I initially used the most popular option, an iframe, but it didn't work, so I replaced it with the HTML video element, which worked 😍. The problem arose when I tried to open PDFs: browsers have built-in plugins that handle PDF opening, but on Android there is no such thing, so I need to open them with external apps via the fileOpener2
plugin.
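The call itself is small once the file is on disk. A sketch assuming the standard cordova-plugin-file-opener2 API; the MIME-type helper is my own and not part of the plugin:

```javascript
// Map the downloaded file's extension to a MIME type for fileOpener2.
function mimeFor(fileUri) {
  if (/\.pdf$/i.test(fileUri)) return 'application/pdf';
  if (/\.(mp4|m4v)$/i.test(fileUri)) return 'video/mp4';
  return 'application/octet-stream';
}

// Hand the file to whatever app on the device handles that MIME type.
function openWithDeviceApp(fileUri) {
  cordova.plugins.fileOpener2.open(
    fileUri,
    mimeFor(fileUri),
    {
      success: () => console.log('file opened'),
      error: e => console.error('open failed', e.status, e.message),
    }
  );
}
```

If no installed app handles the MIME type, the error callback fires, so the UI can show a "no viewer available" message instead of failing silently.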
Final Output: Now one can upload PDFs and videos to the dhis2 resources; they will be fetched by our trainer app and can be accessed with an internet connection.
Have a look at the PR related to this task: Added mediaPage and Populated with real data Fetched from bmgfdev instance and Added support to play all media files of type blob (!194)
Added SQLite Database for storing data locally
We want our application to work in offline mode as well, therefore I decided to use the SQLite database Cordova plugin. Let's look at the implementation of offline support for the media section:
1. I made a mhbsTarinerDb database and a media table to store the data of all the documents. I store a name (displayName of the document), an id (dhis2 id of the document on the server), and a fileUri (location of the file if the media is downloaded).
2. When a user clicks on the media tab, a list is built from the documents available in the local DB and shown on the tab. At the same time, if the user is online, an API call is made to the server; on a success response we update our local DB and the media tab with the new data.
3. When a user clicks on a particular media file, a blob object is downloaded as stated earlier, a local file is made from it and saved to the device, and additionally we now store its location in our DB. When the user clicks the same file again, the old content is shown until the new one is downloaded; once the new content is ready, it is updated in our DB.
This part is almost ready and is under testing; you can follow its progress on the offline-support branch of my forked repo. Work done so far on offline support of the media tab is here.
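The table sketched above needs only three columns. With the cordova-sqlite-storage plugin, the setup looks roughly like this; the db and table names follow the post, the rest is illustrative:

```javascript
// Rough sketch of the media table for offline support.
// buildMediaRow is pure; openDatabase/executeSql assume cordova-sqlite-storage.
const CREATE_MEDIA = `CREATE TABLE IF NOT EXISTS media (
  id TEXT PRIMARY KEY,   -- dhis2 id of the document on the server
  name TEXT,             -- displayName of the document
  fileUri TEXT           -- local location once downloaded, else NULL
)`;

function buildMediaRow(doc) {
  return [doc.id, doc.displayName, doc.fileUri || null];
}

function initDb() {
  const db = window.sqlitePlugin.openDatabase({ name: 'mhbsTarinerDb', location: 'default' });
  db.transaction(tx => tx.executeSql(CREATE_MEDIA));
  return db;
}
```

Keeping `fileUri` nullable is what lets the list show undownloaded documents alongside downloaded ones.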
2. Added SQLite database to store data locally and to provide offline support.
by noreply@blogger.com (Bhavesh Sharma) at June 25, 2021 08:05 PM
The basic POC (Proof of Concept) design was completed and successfully implemented using WebXR and React. A JSON file was created which stores the links and names of all image files present in the directory.
The BiT model was converted using the TensorFlow.js model converter. The generated model was then used in the POC. Images are cast on a curved plane in the VR environment, on which model inference runs.
Every 5 seconds a new image is loaded and inference is delivered within 4 seconds. The labels are displayed as text on top of the plane in the VR environment.
The video demonstrates a sample scenario. The inference reports undefined because the model was not able to classify the image successfully.
That's all for today!!!
Hope you had a great week
Do svidaniya
Hey, readers, we are in the 3rd week of the coding period.
The 3rd week started on 18th June.
In the 3rd week of the coding period, I have done the following things with the cost of care flutter application.
1- Implemented unit tests for the inpatient procedure.
2- Implemented unit tests for the outpatient procedure.
The 3rd-week tasks were quite easy. During the 3rd week I sent a merge request to the development branch.
This is all about the 3rd week of the coding period. I really enjoyed it.
See you next week.
The second week of coding has passed and I am diving deeper into the huge code base of the mHBS Tracker and Trainer app. Let's have a look at what I did this week:
Find out the Cause of Issue-1
As I stated in last week's blog, we were getting a build error like 'STRING_TOO_LARGE', which does not stop the Gradle build from producing the APK but is shown as an error while building the app. This is a really very time-consuming error; I tried a lot of possible solutions to fix it. Let's discuss them one by one:
aapt dump --values resources dhis-debug.apk | grep -B 1 'STRING_TOO_LARGE'
I tried both of the above well-known approaches and finally had no option other than to check each and every resource file that could cause this error.
In the end, I found the culprit, and it is not actually in our mHBS app. We use the dhis2-android-sdk in our app as a submodule; it has very large drawable files in its core package, and these cause the error. Since it is a submodule, this can only be solved by dhis2 itself.
Find out the Cause of Issue-2
We also had one more issue to solve, which we found when we tried to build the dhis2 app on a JDK higher than 8: 'NoClassDefFoundError: javax/annotation/Generated', meaning the dhis2 app does not build with JDK versions higher than 8. This error can be resolved by adding a dependency ( Maven Repository: javax.annotation » javax.annotation-api » 1.3.2 (mvnrepository.com) ) to build.gradle. But again, these changes need to be made in another submodule that the dhis2 app uses, i.e. dhis2-rule-engine, and can only be resolved by dhis2 itself.
Uploaded dummy data on bmgfdev instance of mHBS
We want the functionality to upload video and PDF resources on the website in the bmgfdev instance and to fetch and show them in the trainer app. For now, I added 6 videos from Webinars (aap.org) and one more test video to the resources of the bmgfdev instance. Check Here
Fetched and shown the Video Resources in media tab
This part is slightly pending due to a problem. I have written all the HTML code for the UI and the JavaScript code for the API calls to play the video media, but I am getting an error while hitting the bmgfdev instance API from the browser, due to the browsers' CORS policy. The error looks like this -
Access to XMLHttpRequest at ‘https://bmgfdev.soic.iupui.edu/api/document/' from origin ‘http://127.0.0.1:5500' has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: Redirect is not allowed for a preflight request.
I tested the API and my flow over Postman and it works fine; as soon as the API works in the browser, all the video files will be loaded in the trainer app. I will try to solve it ASAP.
by noreply@blogger.com (Bhavesh Sharma) at June 18, 2021 06:47 PM
The previously trained model was not performing well on the testing dataset, so I had to try a new approach to improve the model's accuracy as well as the other metrics. When I tried training with other pre-trained networks, the model started to overfit after one epoch. So, I had to get creative.
I used a BiT model for training, with only the last layer trainable. The model performed better than the previous pre-trained model based on MobileNet.
Regarding the VR part: the WebXR setup was successful, and I was able to set up a basic WebXR environment. TensorFlow.js was used for model inference, and I used the React framework to bind TensorFlow.js and WebXR together. For access to onboard device sensors, the WebXR content has to be served over HTTPS.
Model Training Results
MobileNet Based Model: loss: 0.9046 - acc: 0.7174 - f1_m: 0.2450 - precision_m: 301254.4277 - recall_m: 0.2275 - val_loss: 1.0795 - val_acc: 0.6811 - val_f1_m: 0.2366 - val_precision_m: 248592.8750 - val_recall_m: 0.2270
BiT Based Model: loss: 0.6331 - acc: 0.8733 - f1_m: 0.8899 - precision_m: 195757.4552 - recall_m: 0.9773 - val_loss: 1.2562 - val_acc: 0.7955 - val_f1_m: 0.9079 - val_precision_m: 126641.6953 - val_recall_m: 1.0619
Inference Time:
When the SavedModel is converted into a TensorFlow.js model using the converter, a few seconds are added to the inference time. So far, the inference time is not that long and is close to, if not less than, 1 second.
That's all for today!!
Hope you had a great week
Ciao
Hey, readers, we are in the 2nd week of the coding period.
The second week started on 11th June.
In the second week of the coding period, I have done the following things with the cost of care flutter application.
1- Added a new bookmark feature.
2- Add to bookmark for a chargemaster.
3- Remove from bookmark for a chargemaster.
4- Chargemaster UI modifications.
5- Added a snackbar while adding to or removing from bookmarks.
During the 2nd week I added another commit to the development branch and updated the merge request.
This is all about the 2nd week of the coding period. I really enjoyed it.
See you next week.