Smart Omix is a no-code SaaS platform for decentralized clinical research. The platform enables researchers to conduct studies independently and collect real-world data, helping advance relevance, equity, and data integrity in clinical research. It is designed to address the technological, economic, and talent barriers that the research community commonly encounters when conducting remote observational studies, and it eliminates the need for design and engineering expertise to build and launch a study, democratizing clinical research in the process.
The global clinical trials market size was estimated at $44.3 billion in 2020 and is expected to expand at a compound annual growth rate of 5.7% from 2021 to 2028. Key factors that are driving the market growth include the adoption of new technology in clinical trials, the increasing prevalence of chronic diseases, and demand from developing countries.
Smart Omix was inspired by an initial study conducted on Myasthenia Gravis (MG) patients, in which 78% of participants were interested in using an app to monitor their symptoms. Smart Omix is a play on the word "omics," a broad term for the biological disciplines that share the -omics suffix (e.g., genomics, epigenomics), characterizing a multidisciplinary and rapidly evolving set of fields focused on understanding the impact of biological molecules on the cells of a living organism.
Smart Omix focuses on the custom studies space as a way to democratize clinical research by lowering the barrier to participation. Making the product more participant-centric gives participants a stake in the study. Smart Omix aims to take care of the data integration, allowing researchers to focus on their job of designing the study, and reducing the overall cost of conducting a study by removing the need to hire developers.
The challenge is creating a digital end-to-end clinical research experience for individual clinical researchers. Based on a competitive analysis, we identified self-service automation and passive data integration as key differentiators.
Though we had a good understanding of how to differentiate ourselves among the competitors, we wanted to get a better idea of the mental model and motivations of our potential users. User research, including converging ideas from previous user interviews into an affinity diagram, enabled us to define Dr. Angela Wright, our primary user persona, and helped us postulate her user journey.
We learned that researchers don’t wake up every day wanting to design an app. They want to focus on their job of asking the right questions and to serve it up as quickly as possible to get the quality data they need. The challenge was, how do we best align with the current mental model of researchers? How do we create a structure that is least disruptive to what they’re already used to when building a study?
Dr. Wright wants to create a study end-to-end in one platform so she can spend more time doing what she's good at: asking the right questions.
Most SaaS products take what used to be done in analog and digitize it for scalability, so we wanted to look into platforms that had this analog-to-digital concept baked into the product. Since the most used product in the clinical research space still functions like it was designed in the late '90s (i.e., lengthy, confusing, and lacking delight), we wanted to learn from divergent platforms with a more modern design. We looked into Typeform, Asana, and Hubspot, and learned that in order to optimize for self-service automation, the product needs effortless facilitation across every step of the study-building process.
Working with the product manager and developers, we aligned on an initial list of basic task paths, built around a navigation structure that matched the mental models of clinical researchers. This informed the app's initial information architecture and served as the basis for the initial UI.
As we continued to design the UI, we came across several design challenges. Here are some of the highlights:
Conversations with clinical researchers revealed that there is a standard process for designing studies. Creating a navigation structure that mimics this mental model makes the process feel familiar on a digital platform, and therefore speeds up designing the study.
Since, from the researcher's perspective, the main goal of Smart Omix is to build a study as quickly and simply as possible, we agreed that following the standard timeline for a study aligned best with how researchers build studies in an analog way: from start to finish, with some flexibility in between to make room for revisions. We felt this was the most scalable option because it was a framework that could apply to studies of varying complexity, and one that followed a narrative researchers are already familiar with. Initial user tests validated this.
We explored a top navigation option, with the intention of providing more real estate for the features below. However, as we continued to nest additional pages under a main page, we became concerned that the constraints of the page width would limit the number of pages we could nest in the future.
As a result, we redesigned the navigation to sit on the left to accommodate a growing list of nested pages, and found that the reduction in real estate for the main pages was negligible given the amount of white space we had allocated in the original designs. We also expanded the original nav to include additional pages, and validated with internal subject matter experts that the changes were more effective at communicating the requirements of the product while still preserving the flexible linearity of the navigation.
Passive data integration is a critical feature differentiating Smart Omix from its competitors. We also know that acquiring quality data is crucial to researchers' success, and that novel ways to collect data are key to expanding what a study can capture. To support these needs, we created a data integration dashboard filled with cards of active integration options, such as daily step count and GPS. Each card enables users to seamlessly add the integration requirement to their study. Researchers also want to know what their data will look like when downloaded, so we show them examples of what that table could look like. Lastly, researchers can view how the permissions will appear on the participant's end, keeping design top of mind. Due to dev constraints, we decided to push this feature to a later release.
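The integration-card pattern described above can be sketched as a simple data model. This is illustrative only; the field names and the toggle logic are assumptions, not Smart Omix's actual schema:

```typescript
// Hypothetical shape of a data-integration card; names are assumptions,
// not the actual Smart Omix data model.
interface IntegrationCard {
  id: string;
  label: string;                 // e.g. "Daily step count", "GPS"
  addedToStudy: boolean;         // whether the researcher added it to the study
  sampleColumns: string[];       // preview of the downloadable data table
  participantPermission: string; // permission prompt shown on the participant's end
}

// Adding an integration flags the matching card without mutating the original list.
function addIntegration(cards: IntegrationCard[], id: string): IntegrationCard[] {
  return cards.map(c => (c.id === id ? { ...c, addedToStudy: true } : c));
}
```

Keeping the update immutable makes it easy for the dashboard UI to re-render from the returned list while the previous state remains available for undo.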
Creating questions for clinical research is one thing. Designing them for a mobile phone is another. We understand how important it is for researchers to create their study seamlessly and have it look great too. In this feature, researchers can create their questions (i.e., ePROs) while seeing the design come to life instantaneously. Once they create a handful of questions, the ePRO dashboard serves as a central hub where users can reorganize and manage questions at the schedule level, always having a full overview of their whole study. All without writing a line of code.
A complete ePRO requires a cadence, a name, and a question. The cadence dictates the frequency with which the question is asked of the participant, the name helps the participant understand the topic of the question, and the question itself is the main driver of collecting pertinent data for the study. This short, linear approach to completing an ePRO helps reduce errors by focusing on one step at a time and gives researchers manageable goals so that completion feels attainable.
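The three required fields above suggest a minimal data model with a completeness check, which is what makes the step-by-step flow enforceable. This is a sketch under assumed names; it is not Smart Omix's real schema:

```typescript
// Illustrative ePRO record; field names and cadence values are assumptions.
type Cadence = "daily" | "weekly" | "monthly";

interface EPro {
  name: string;     // topic shown to the participant
  cadence: Cadence; // how often the question is asked
  question: string; // the question text itself
}

// A draft ePRO is complete only when all three required fields are present.
function isComplete(draft: Partial<EPro>): draft is EPro {
  return Boolean(draft.name && draft.cadence && draft.question);
}
```

A check like this lets the builder UI gate the "done" state of each step, mirroring the one-step-at-a-time flow described above.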
Every study must go through a review board before becoming available to the public. We had to design a flow that guides researchers through the review process while maintaining the correct version of their study. Researchers need a way to lock their study before submitting it to the review board, ensuring that no new changes can be made while the review is underway. Unlocking the study, if needed, generates a new version that is logged and archived. Badges were most effective at communicating this dual status of version and study state.
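The lock-and-version behavior described above can be modeled as a small state transition. This is a hypothetical sketch; the state names and versioning rules are assumptions, not the actual implementation:

```typescript
// Hypothetical study states; Smart Omix's real states may differ.
type StudyState = "draft" | "locked";

interface StudyVersion {
  version: number;
  state: StudyState;
}

// Locking freezes the current version so it can be submitted for review.
function lock(study: StudyVersion): StudyVersion {
  if (study.state !== "draft") {
    throw new Error("Only a draft study can be locked");
  }
  return { ...study, state: "locked" };
}

// Unlocking archives the locked version and opens a new editable one,
// which is why the badge must show both version and state.
function unlock(study: StudyVersion): StudyVersion {
  return { version: study.version + 1, state: "draft" };
}
```

Bumping the version on unlock, rather than on lock, matches the described behavior: the reviewed copy stays frozen and archived, and edits always land in a fresh version.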
We also understand that for researchers, quality data is crucial, which means enrolled participants must be high quality as well. This largely translates into adherence, and we believed that tracking and tagging participants' adherence rates would be a simple way for researchers to manage their participant list. However, due to dev constraints, tracking adherence was pushed to a future release. We prioritized the participant's status in the enrollment funnel instead, as this was a crucial engagement indicator.
Another way researchers can manage their study is by assessing the health and make-up of its enrollees. Once the study is live, researchers can see a live dashboard of participants' enrollment, demographics, adherence, and platform engagement. Enrollment and adherence in particular can signal whether changes to the study need to be made. Iteration 2 was a simplified version in response to development constraints, reduced to the essential data points first, such as enrollment status and retention.