The term "Big Data" is a few years old, but its implications for medical devices is at an inflection point.
As the reader may know, Big Data refers to data sets of enormous scope (think terabytes of data). Historically, many of these databases have contained highly dimensional data about the activity of people online; the advertising platforms of Facebook and Google, the backbone of their businesses, are examples of products informed by big data sets. Applications aren't limited to the web, however; Walmart's transaction databases are estimated at 2.5 petabytes (1 PB = 1,000,000 GB).
The lesson: many successful companies have discovered that the ability to collect and analyze enormous amounts of data gives them a significant competitive advantage, if not the core value proposition of their organization.
So, how does this apply to medical devices?
Health-related data is becoming more abundant. Attitudes toward sharing personal data have relaxed, and for those who remain concerned about privacy, HIPAA provides well-documented guidance for de-identifying Protected Health Information. Still more data is produced and stored by devices from consumer health companies such as Fitbit and Withings that cater to the growing quantified-self market.
Bottom line: there is precedent for collecting data on a large scale... so what data should you capture?
The answer depends on your specific application, but here are some ideas you should consider:
- Could you make a better walking brace with data on millions of steps taken by thousands of users? What if you also had information on their recovery time?
- Could you make a better surgical robot with data on the position of every end-effector at every second of every surgery it ever performed?
- Could you make a better ventilator with flow/pressure/heart rate/O2 Sat/etc. data on millions of inhalations and exhalations?
Now imagine your company had such a data set... and your competitor didn't. As the data set grows and your products become more data-driven, a network effect takes hold. Who would want to buy a competitor's product built on an inferior data set? You would be Google; your competitor would be Yahoo.
Analyzing Big Data can be hard. The hurdles are both logistical and analytical:
- Logistically, Big Data can't be loaded into a laptop’s RAM (i.e., you won't be opening it up in Excel). To “look at” Big Data, specialty tools such as the Hierarchical Data Format (HDF) or Hadoop may be required.
- Analytically, machine learning techniques such as neural networks may be required if a pattern or trend can't be isolated using strictly mathematical methods. Such techniques, while well-understood, differ from traditional statistics and can have a bit of a learning curve.
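To make the logistical hurdle concrete, here is a minimal Python sketch of the streaming approach such tools rely on: process the data one row (or chunk) at a time, keeping only running aggregates in memory. The column names and the aggregation are hypothetical, chosen to echo the ventilator example above.

```python
import csv

def mean_flow_by_patient(lines):
    """Stream rows one at a time, keeping only running totals in memory
    instead of loading the full data set at once."""
    sums, counts = {}, {}
    for row in csv.DictReader(lines):
        pid = row["patient_id"]
        sums[pid] = sums.get(pid, 0.0) + float(row["flow"])
        counts[pid] = counts.get(pid, 0) + 1
    return {pid: sums[pid] / counts[pid] for pid in sums}
```

Tools like HDF5 and Hadoop generalize this idea, via chunked on-disk storage and distributed map/reduce passes respectively, so no single machine ever needs the full data set in RAM.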
A company cannot perpetually compete while operating on less data than its competitors. Ever-increasing saturation of connectivity (RFID, Wi-Fi, Bluetooth, NFC) is allowing us to collect and store more and different data than ever before. The critical question is: what kind of data can give you an edge?
Over the past 2-3 years, most health care providers in the U.S. have completed the transition to Electronic Medical Records (EMR). Ultimately, the adoption of EMR is meant to make health care more efficient and less expensive while improving a patient’s quality of care by making their medical history readily available to all of their healthcare providers.
As users in the field gain experience with EMR, however, usability problems have emerged that stem from the user interaction design of many of the EMR software packages currently in use. The designers and UI engineers developing EMR software need to address these problems in order to make EMR more effective and to reduce the likelihood of dangerous mistakes. The following common usability problems exist today:
1. Patient Identification Errors
Patient identifiers (e.g., EMR number, patient name, date of birth) are not clearly displayed or selectable onscreen, resulting in treatment actions with potentially harmful consequences performed for one patient that were intended for another patient. (1)
Information is displayed in a confusing format, which can lead to a patient receiving the wrong medication. Medication-related data is displayed in a manner that makes it easy to miss. For example, a physician prescribes a medication containing sulfate to a patient with a sulfate allergy because the allergy information within the EMR is not clearly emphasized or is difficult to locate on the page. (1)
2. Delay in Treatment Events
Poor EMR page design leads to a delay in critical patient care activities. For example, a patient’s surgery is delayed because an alert about an abnormal lab test result was not displayed clearly and in a manner designed to signify its importance. (1)
Clinicians perform critical tasks, or steps in a task, out of order. For example, a patient with a fever may have a blood culture performed, followed by intravenous antibiotics. If antibiotics are given prior to the blood culture, the sensitivity of the culture decreases dramatically. EMRs that support providers in the order of events are more likely to reduce order errors. (1)
3. Use of Technical Jargon
Based on several interviews with physicians who are currently working with EMRs, we discovered that onscreen lab reports often contain technical jargon that may not be familiar to all healthcare providers, prompting the user to look up a term online. Most current EMR reports do not have an embedded appendix or glossary of terms.
4. Lack of Readability
Developers often base their EMR design on alphanumeric data fields rather than on compelling and easy-to-scan visual elements like charts, graphs, and color schemes that can be helpful to users who must quickly read and process information displayed onscreen. For example, if the report contains an abnormal score, it should be clearly displayed using alerting colors and contrasting type styles, to capture the physician’s attention. (4)
5. Inconsistent Formats
Because each hospital or practice may use a different vendor’s EMR, doctors can often encounter several different EMRs during the course of a work day, each of which uses a different format for displaying information. This lack of consistency can make it difficult for physicians to find information quickly.
In order to make EMRs easier to use and reduce the patient treatment errors that result from flawed information design, we offer a few ideas for UI design:
Interactive Graphical Treatment Timelines
By incorporating interactive graphical treatment timelines that track the cause-and-effect details of a patient’s healthcare process, the health care provider is able to quickly see the patient’s pathology and treatment history in a way that is useful and intuitive. (2,5)
Effective Use of Color
Use a color system to differentiate data points and make it easier for the user to visually map fields and values. For example, if a patient’s white blood cell count comes back as trending lower, it could be indicated in red. (6)
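As a sketch of this rule, the mapping from a lab value to an alert color can be as simple as a comparison against the result's reference range. The thresholds, margin, and color names below are illustrative only, not clinical guidance.

```python
def alert_color(value, low, high):
    """Return a display color for a lab result relative to its normal range.
    Out-of-range values get the strongest visual treatment."""
    if value < low or value > high:
        return "red"      # out of range: draw the clinician's eye immediately
    margin = 0.1 * (high - low)
    if value < low + margin or value > high - margin:
        return "amber"    # borderline: worth a second look
    return "green"        # comfortably within range
```

For example, a white blood cell count below the bottom of its reference range would render in red, matching the recommendation above.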
Group Data Fields Where Appropriate
Information should be placed onscreen near other data with which it is often viewed. For example, a patient’s blood pressure report should be placed near the lipids (cholesterol) report as they are often linked and reviewed in the context of the patient’s overall cardiovascular health. (4,5,6)
Help and Reference Documentation
Incorporate a Help section and reference appendices into the EMR screens so that the user can find them quickly and access them easily. (6)
1. B. Shneiderman, "Tragic Errors: Usability and Electronic Health Records," Interactions, Nov.-Dec. 2011.
2. L. Lins, M. Heilbrun, C. Silva, "VisCareTrails: Visualizing Trails in the Electronic Health Record with Timed Word Trees, a Pancreas Cancer Use Case," Workshop on Visual Analytics in Healthcare, pp. 13-16, 2011.
3. C.B. Teston, "Investigating Usability and ‘Meaningful Use’ of Electronic Medical Records," SIGDOC, pp. 227-232, Oct. 2012.
4. K. Wongsuphasawat, D. Gotz, "Outflow: Visualizing Patient Flow by Symptoms and Outcome," Workshop on Visual Analytics in Healthcare, pp. 25-27, Aug. 2011.
5. Z. Zhang, F. Ahmed, A. Mittal, "AnamneVis: A Framework for the Visualization of Patient History and Medical Diagnostics Chains," Workshop on Visual Analytics in Healthcare, pp. 17-20, 2011.
6. R. Pereira, J. Duarte, M. Salazar, "Usability Evaluation of Electronic Health Record," Int. Conf. on Biomedical Engineering and Sciences, pp. 361-364, Dec. 2012.
These are exciting times in the drug delivery industry. A host of new delivery platforms is in development, some of which have recently reached the market. The primary goal of these developments is to create systems that optimize a drug’s therapeutic value, but it’s also believed that finding better ways to get a drug into a patient’s system in a safer and more consistent way will lead to better compliance and outcome. Additionally, it’s estimated that up to 50 percent of new drugs can’t be taken orally, so the impetus to create innovative delivery platforms is strong and growing. Finally, an aging population, a growing demand for medications that can be self-administered at home, and the increased incidence of chronic diseases such as diabetes are other important factors driving growth in drug delivery techniques.
It’s estimated by one study that the worldwide market for the 10 most popular drug delivery methods (including oral) will reach $81 billion in 2015. Another report puts the market significantly higher, at $142 billion in 2012. Whatever the market size, it’s clear that these new technologies have the potential to revolutionize patient care. Here’s a brief rundown of promising and novel drug delivery systems.
Nanotechnology, according to one definition, is the “engineering and manufacturing of materials at the atomic and molecular scale.” As defined by the National Nanotechnology Initiative, nanotechnology refers to structures measuring roughly 1-100 nanometers (nm) in at least one dimension that are developed by top-down or bottom-up engineering of individual components. So-called “nanomedicine” is considered one of the most promising drug delivery platforms ever developed and is being used to deliver both new compounds and previously approved drugs:
- siRNA (small inhibitory RNA) is a bit of genetic material that interferes with gene expression. Researchers at several institutions have been loading siRNA into silicon nanoparticles to deliver it to an ovarian cancer gene. Results so far indicate it may reduce ovarian tumor size by up to 83 percent;
- A lipid nanoparticle is being studied as a drug delivery system for orphan diseases, such as rare liver disease;
- Magneto-electric nanoparticles are being developed as vehicles for delivering and releasing the anti-HIV drug AZTTP into the brain; and
- Sugar-sensitive nanoparticles that release glucose may revolutionize diabetes treatment.
Skin patches are another hot area in drug delivery development:
- The FDA recently approved NuPathe’s patch for treating migraine headaches. Zecuity™, says the company, is a "single-use, battery-powered patch that delivers the most widely prescribed migraine medication through the skin";
- The Nanopatch, a silicon patch that’s smaller than a fingernail, is made of thousands of microprojections coated with a vaccine. It’s held against the patient’s skin and the microprojections penetrate the outer layer of skin to deliver the vaccine; and
- Purdue University researchers (perhaps inspired by beer) have created a tiny fermentation-powered pump that requires no batteries and may be useful for powering transdermal patches, enabling delivery of drugs for treating cancer and autoimmune diseases that previously couldn’t be delivered with a patch due to the large molecule size of these medications.
Powder inhalation delivery has long been used for treating diseases such as asthma. A promising new application for this technology is in treating diabetes through inhaled insulin therapy. MannKind Corporation’s Phase 3 clinical trials are investigating the performance of its insulin delivery treatment. The product is a simple inhaler device combined with insulin inhalation powder pre-measured into single-use cartridges.
While this technology is not new, current research efforts are focusing on devices that are lighter and easier to use. Needle-free jet injection devices produce a high-velocity “drug jet” that enables today’s larger molecule, protein-based drugs to penetrate the skin. One such device, developed by MIT, is said to improve on older jet-injection platforms by delivering programmable and adjustable doses, making this delivery system more useful for treating sensitive populations, such as elderly or pediatric patients.
CeQur has received European approval for its PaQ insulin delivery technology. CeQur’s device attaches to a patient’s abdomen and insulin is delivered subcutaneously through a cannula from an onboard reservoir.
A novel gel material capable of releasing drugs in response to patient-applied pressure is getting close attention from researchers. This new gel releases a test drug in response to a stimulus that mimics finger pressure. Delivery platforms like this may help patients who need fast drug administration, such as asthma sufferers or those with acute cancer pain.
These new generation drug delivery technologies hold great promise to deliver better care to patients around the globe.
Farm's Director of Research and Usability, Beth Loring, and Senior Industrial Designer, James Rudolph, recently presented at the UXPA Boston 12th Annual User Experience Conference on May 29, 2013, at the Sheraton Boston in Boston, MA.
The UXPA Boston annual conference covers critical topics in usability and user-centered design with practitioners, students, and experts in the field. Beth Loring and James Rudolph presented "Watch the Sterile Field! Conducting Research in the OR."
The presentation offers practical advice and tips based on recent experiences and lessons learned through more than 75 international OR observations. The presentation covers:
- Recruiting surgeons and their teams
- Gaining access to procedures, including credentialing
- What to expect when you arrive at the hospital
- Patient confidentiality and HIPAA
- Etiquette and attire
- What happens before, during, and after the case
- Taking photos and recordings
- Differences between the U.S. and other countries
- A technique for visualizing, exploring, and analyzing data
It is well documented that different evaluators conducting usability evaluations of the same product often come up with disparate findings. Does the suspect reliability of usability evaluations mean we should stop conducting them altogether? The answer is a clear and resounding no.
The reality is that usability evaluations can yield different results, depending on the way in which they are conducted. More importantly, however, usability evaluations identify important use-related challenges and hazards, and provide insight into how a design can be improved.
In this blog, we’ll explore challenges associated with conducting reliable usability evaluations and offer insights as to how to overcome these challenges. We’ll also discuss how to improve usability testing practices to ensure we are identifying the most important issues.
Usability Evaluation Methods
There are two primary methods of evaluating the usability of a product: (1) usability tests and (2) expert reviews. The key difference between the two is that usability tests are conducted with representative users, while expert reviews (e.g., heuristic analyses, cognitive walkthroughs) are typically performed by usability professionals and/or domain experts. Both methods use a set of tasks that help evaluators identify usability issues, and in both methods usability professionals analyze the data in order to categorize problems, rate them according to a defined severity scale, and offer recommendations for design improvement. While there are numerous methods for evaluating usability, one thing is clear: different evaluators can produce different results.
Overlap of Usability Issues
One might reasonably assume that expert usability professionals conducting different evaluations of the same product would uncover the same usability problems. Unfortunately, the reality is quite different. Numerous studies have explored this issue, the most prominent being the Comparative Usability Evaluation (CUE) series. A striking portrait of the lack of overlap is painted when you look at the number of unique issues reported by single teams across the first four studies of the series:
- CUE-1 – 91%
- CUE-2 – 75%
- CUE-3 – 61%
- CUE-4 – 60%
The CUE-4 study represents the most comprehensive comparison of usability studies to date, involving 17 usability teams in total, nine of which performed expert reviews and eight of which conducted usability tests (Molich & Dumas, 2008). As seen above, 60% of all problems reported were identified by only one team. Many others have found strikingly similar overlap in their own research. See, for example, Jeff Sauro’s blog “How Effective are Heuristic Evaluations?”
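The "unique issue" figures above are straightforward to compute from the teams' raw findings: count the distinct issues that appear in exactly one team's report. A minimal sketch, with made-up team names and issue labels:

```python
from collections import Counter

def percent_unique(teams):
    """Percentage of all distinct reported issues found by exactly one team.
    `teams` maps a team name to the collection of issues it reported."""
    tally = Counter(issue for issues in teams.values() for issue in set(issues))
    unique = sum(1 for n in tally.values() if n == 1)
    return 100.0 * unique / len(tally)
```

In CUE-4 terms, a value of 60 would mean that 60% of all problems reported across the 17 teams appeared in only a single team's report.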
Factors Affecting the Reliability of Usability Evaluations
A large part of the problem can be attributed to the variables affecting both types of usability evaluations. We’ll discuss several of the most important variables affecting usability evaluations, and offer some practical insights as to how to reduce their impact on reliability. This list is far from comprehensive, and we invite readers to add additional variables to the commentary at the end.
Task selection. Task selection is an important aspect of both usability tests and expert reviews because the tasks performed greatly affect the interaction that test participants and/or evaluators experience. As Molich and Dumas point out, “A usability problem—even a critical one—normally will only be discovered if there are tasks that focus on the interface where the problem occurs” (p.275). Development teams should define the primary operating functions and frequently used functions, and then create tasks that will allow participants to interact with these areas. In medical device development, tasks should also be created to address potential use-related hazards defined during risk analysis.
Test protocol. Unfortunately, evaluation teams may use different instructions and ways of interacting with participants during a usability test, and these subtle differences can bias the results. At Farm each moderator closely follows the same protocol. We also conduct pilot sessions to ensure participants fully understand the questions and are not biased by the way questions are framed. During formative testing, the think-aloud protocol may also uncover instances where the task instructions are misleading the user and/or causing confusion. For a more comprehensive discussion of creating a successful protocol, see Beth Loring and Joe Dumas’ “Effectively Moderating Usability Tests.”
Categorization of problems. The categorization of usability problems is also important. In the CUE-4 study, participants were asked to use predefined categories. The authors found that identical issues were sometimes classified as positive findings and sometimes classified as usability issues. In other studies, including previous CUE studies, evaluators were asked to define their own categories and scales. The language used to define problems will undoubtedly affect the way people, including clients, understand test results. It is important that categories be easily defined and understood by the development team.
Domain knowledge. In a study of heuristic evaluations conducted by Nielsen (1992), “double experts,” or usability experts with extensive knowledge of the specific domain being studied, performed better than usability professionals without domain expertise. It is sometimes suggested that usability professionals who become too knowledgeable about a device can lead test participants during the evaluation, but we have found that evaluators who take the time to understand a product will produce a better and more relevant list of issues than those who do not.
Providing recommendations. There is no hiding the subjective nature of providing recommendations. Nevertheless, this is a critical juncture in the process, one that represents a shift from research to solving problems. Some common problems include: overly vague recommendations, recommendations that are in direct conflict with business goals, recommendations that reflect personal opinion alone, and implicit recommendations. To avoid some of these pitfalls, it is critical that evaluators provide solid evidence for how a recommendation supports the issue that is uncovered. Similarly, we have found that recommendations are most useful when the usability team has been closely involved in the development process.
It is important to know that in a medical device summative report, third-party evaluators such as Farm are not supposed to suggest how an issue will be mitigated. We simply report the issue and provide the root cause from the user’s perspective. It is up to the device manufacturer to report to the FDA how they fixed the problem and re-tested the issue.
The Value of Usability Testing
According to available research, the results of usability testing and expert reviews can be inconsistent across evaluators. Fortunately, they can be made more reliable by applying rigor to various aspects of usability evaluations, including the test protocol and task selection. A usability evaluation, while based on the fundamental principles of behavioral science, is a tool used to provide better and safer products, and it should be judged on its ability to inform design change, to improve the user experience, and to improve the safety of medical devices. The science of evaluating products is not perfect, but if we keep the end goal in mind, we will have a better appreciation for the positive impact that usability evaluations have on product development.
Molich, R., & Dumas, J.S. (2008). Comparative usability evaluation (CUE-4). Behaviour & Information Technology, 27(3), 263-281.
Nielsen, J., and Molich, R. (1990, April). Heuristic evaluation of user interfaces. CHI 1990 Proceedings, 249-256. Seattle, Washington.
Massive Market and Growth
The growth of the mHealth industry has product developers and software engineers increasingly focused on the regulatory acceptance criteria they may face when developing these apps. Research2Guidance, a global mobile research group, states that 500 million smartphone users are expected to be using mHealth medical apps by 2015. Today there are approximately 97,000 mHealth medical apps available, and sales of medical apps are expected to reach $26 billion by 2017.
In the Farm blog The Rise of Mobile Health and the Importance of Human Factors, many of the drivers contributing to the explosive growth of mobile software development are highlighted, including economic and technology trends and the ubiquity and convenience of mobile platforms. It’s clear that healthcare information, made available to both consumers and healthcare providers via mobile devices, has the potential to reduce healthcare costs and improve care across multiple point-of-care environments.
To provide clarity and direction for mHealth medical app developers, the U.S. Food and Drug Administration (FDA) has developed guidance outlining the suggested path to approval and including a set of definitions on what is and is not regulated. The FDA differentiates between a wellness app and a potential medical device (such as a device that uses an mHealth medical app). This is an important distinction for developers, since the FDA has said it will not regulate wellness apps, and thus the documentation burden for the developer falls within more traditional development practices. Per these guidelines, an mHealth medical app will be regulated if it is:
- An app that displays, stores, analyzes, or transmits patient-specific medical device data
- An app that transforms or makes a mobile platform into a regulated medical device
- An app that performs actual medical device functions
- An app that allows users to input patient-specific information and provides patient-specific results, diagnosis, or treatment recommendations used in clinical practice or to assist in making clinical decisions
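The four criteria above can be read as a simple checklist. The sketch below encodes them for illustration only; the field names are hypothetical, and this is in no way a substitute for an actual regulatory determination.

```python
def likely_regulated(app):
    """Return True if the app matches any of the four FDA criteria above.
    `app` is a dict of boolean flags describing what the app does."""
    return any([
        app.get("handles_device_data", False),       # displays/stores/analyzes/transmits device data
        app.get("transforms_platform", False),       # turns the mobile platform into a medical device
        app.get("performs_device_function", False),  # performs actual medical device functions
        app.get("gives_patient_specific_advice", False),  # patient-specific diagnosis/treatment output
    ])
```

An app that merely tracks general wellness would set none of these flags and fall outside the regulated category, consistent with the wellness-app distinction above.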
Present and Evolving Guidance
The current guidance for mobile medical applications was issued in 2011. This guidance provides some support to manufacturers by categorizing mHealth medical apps by risk to the patient and addressing the level of development (including testing) documentation expected for each risk class. This guidance helps the developer determine accurate costing and time estimates. These guidelines also help determine which of the existing mHealth medical apps are to be classified as Class I (general controls), Class II (special controls in addition to general controls), or Class III devices awaiting premarket approval (PMA). Class I devices, or MDDS (medical device data systems), are considered the least risky, and the FDA has exempted them from premarket review. mHealth medical apps that provide electronic transfer, storage, or display of medical device data qualify as Class I devices. Class II devices require filing for 510(k) clearance and include products such as mHealth medical apps that display radiological images for diagnosis. Class III devices requiring PMA may require clinical trials if the app is novel (with no predicates). An mHealth medical app also must be FDA approved if it is an extension of an FDA-approved and regulated medical device.
The FDA is making an effort to address concerns and provide oversight on guidance issues. It is important to be aware that Congress is also asking for clarification on this effort. According to MobiHealthNews, as of March 15, 2013, a letter has been sent to the FDA seeking clarification on the agency’s intentions.
In an attempt to fill the gap, the guidance goes on to call out additional standards and regulations for subjects such as:
- Software verification and validation
- Off-the-shelf software used in medical devices
- Cybersecurity for networked devices containing off-the-shelf software
- Radio frequency wireless technology in medical devices
These regulations and guidelines, as well as the guidance for the content of premarket submissions for software contained in medical devices, provide insight into what is expected in a premarket submission for performance and process documentation, the potential high-level risks an app may encounter, and areas where field issues have occurred. By reviewing these publications, an mHealth developer who has little or no experience working in the regulated medical device market will have the key information required to develop a plan and a process for creating, testing, and documenting an app that can be submitted with the best chance for clearance.
Software Development Regulation
Critical to mHealth software development is guidance for the development process itself. The FDA does not author its own standards, but has chosen to recognize the international standard for medical device software development, ANSI/AAMI/IEC 62304:2006. A detailed discussion of this standard is included in the online article Developing Medical Device Software to IEC 62304. The standard uses a patient-risk method for identifying risk associated with device software, and relies heavily on ISO 14971:2007 for a risk management approach to be followed throughout software development.
For mHealth developers who are unfamiliar with regulated software development, Happtique, a mobile health software development company, has created a voluntary program that provides a set of interoperability guidelines. Designers who follow these guidelines and apply to have their mHealth medical apps tested will receive certification for their mHealth medical apps, informing the consumer that they have achieved this level of interoperability. The program has been reviewed by the FDA and is seen as a complement to its regulatory requirements. This is covered in the article Happtique Publishes Final Standards for Mobile Health App Certification.
While these numerous standards, regulations, and guidances are complex and may prove confusing, there are some basic steps that developers can take in order to provide a clear and predictable development path:
- Thoroughly understand the app’s intended use. It’s important to be able to define the intended use for an mHealth application and communicate its use precisely, including benefits the app provides to the end user and patient and, if possible, including what the app is not intended to do. Not only will this have a positive impact on the marketing of an mHealth app, it helps define the depth of process and documentation needed throughout the development process
- Follow existing FDA and international guidance relating to communication, electrical, and platform hardware, so that key risks can be avoided. Create a development process that will identify risk factors contained within hardware components, wireless protocols, operating systems, and platform-specific interferences, and from both a technical and a user perspective. Identify critical issues and create an approach that will minimize or eliminate them. Consider using FDA’s MAUDE database in order to mitigate risk
- Create a software development process according to the ANSI/AAMI/IEC 62304:2006 standard in order to support safe software that meets performance requirements
- As defined in the blog article The Rise of Mobile Health and the Importance of Human Factors, follow ANSI/AAMI HE75:2009 to ensure that the mHealth app will be safe and easy to use
- Consider an iterative implementation approach: rank risks and design mitigations around the highest risks. Implement and test those first, both for performance and for usability (by testing with real users). Build small increments and test those, continuing until all functionality is implemented and the risks have been minimized or eliminated
- For mHealth apps that will be used on multiple platforms, target implementation by most-to-least market impact, then go through the same iterative development path
- According to regulations, developers must monitor released applications for safety and effectiveness issues. mHealth developers should follow FDA CAPA guidance to improve the development process
- Finally, use well-established development and design practices, and incorporate testing tools, to reduce the probability of introducing defects
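The risk-ranked iteration suggested in the steps above can be sketched as a simple prioritization: score each identified risk and schedule mitigation work highest-score first. The severity and probability scales and the risk items below are hypothetical, in the spirit of an ISO 14971-style analysis rather than a prescribed method.

```python
def rank_risks(risks):
    """Order risks by severity x probability, highest priority first.
    Each risk is a dict with numeric 'severity' and 'probability' scores."""
    return sorted(risks,
                  key=lambda r: r["severity"] * r["probability"],
                  reverse=True)

# Hypothetical risk register for an mHealth app
register = [
    {"name": "wrong dose displayed", "severity": 5, "probability": 2},
    {"name": "Bluetooth dropout",    "severity": 3, "probability": 4},
    {"name": "slow screen refresh",  "severity": 1, "probability": 5},
]
```

Each iteration would then implement and test mitigations for the top-ranked items first, re-scoring the register as testing results come in.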
Home healthcare and the use of medical devices outside of the professional healthcare environment are on the rise. Modern medicine allows us to live longer and provides those with chronic diseases the ability to receive medical care at home. Examples of home-use devices are oxygen concentrators, hospital beds, sleep apnea monitors, body-worn nerve and muscle stimulators, and dialysis machines, just to name a few.
According to the NAHC (National Association for Home Care & Hospice), approximately 7.6 million individuals are receiving home healthcare in the United States from roughly 17,000 paid providers. Not only does home healthcare improve recipients’ quality of life, but it also provides cost savings. Looking at the chart below, you can see the cost advantages to receiving care at home.
Despite the advantages, employment of devices outside of professional healthcare facilities increases the risk of harm through unintended or potential misuse, driving an even greater need for devices to be designed using human factors principles to mitigate these risks.
In the past, manufacturers of home healthcare equipment were required to comply with IEC 60601-1, demonstrating that their designs mitigate the risks associated with use in the home by patients or caregivers. In 2010, a new collateral standard, IEC 60601-1-11, was published, turning attention specifically towards requirements for medical electrical equipment and medical electrical systems used in home care applications. Also in 2010, the FDA started the Medical Device Home Use Initiative to "ensure the safety, quality, and usability of devices labeled for home use." Accordingly, the agency states that it will take the following actions to support the safety and safe use of medical devices in the home:
- Establish guidelines for manufacturers of home-use devices;
- Develop a home-use device labeling repository;
- Partner with home health accrediting bodies to support safe use;
- Enhance post-market oversight; and
- Increase public awareness and education.
These steps will help address the challenges associated with the use of medical devices in the home and provide greater protections for patients receiving home healthcare.
As part of the initiative, a new draft guidance has been written to help manufacturers "design risk out of the device." Draft Guidance for Industry and Food and Drug Administration Staff, Design Considerations for Devices Intended for Home Use is meant to provide advice and summarize other guidance documents available to manufacturers, citing over 24 documents and international standards. It is important to remember that the guidance was created to help manufacturers understand all of the variables that should be taken into account when designing a home-use device; it may not be possible to follow every guideline simultaneously. Designers should follow the guidelines to the extent possible, which will help the FDA evaluate the device’s requirements, functionality, and safety.
How are “home-use devices” and “environments” classified?
A home-use device is a medical device intended for users in any environment outside of a professional healthcare facility or clinical laboratory. The term includes devices intended for use in both professional healthcare facilities and homes.
A user is a lay person such as a patient (care recipient), caregiver, or family member who directly uses a device or provides assistance to the patient in using the device.
A home is any environment other than a professional healthcare facility or clinical laboratory where a device may be used.
Note that the word "home" is being used loosely to mean ANY location in which you use the device outside of a professional healthcare facility or clinical laboratory. Thus, it could be your primary home, your vacation home, your car, public transportation, outdoors, or any other non-clinical location.
It is vital to take into consideration the potential environment(s) where the device may be used. Just a few of the challenges are:
- There may be few electrical fixtures and they may not be up to code;
- The lighting may be poor;
- The home could be quieter or louder than a hospital environment; for example, children or a loud TV can add to the noise level;
- Homes, vehicles, and public spaces are not always designed to mitigate maneuverability barriers; and
- Sterility and cleanliness in a home will not be the same as in a clinical setting.
So what should manufacturers do when designing their next home-use device? Consider the following high-level summary based on my review of ANSI/AAMI HE75:2009, Section 25, on home healthcare and the draft guidance on home use:
- The User - Front and center to the design of any product is the user. Who is the ultimate user of the device: the patient, the caregiver, a visiting nurse, or other person? Can the device be used by the intended population? Might the user have sensory, cognitive, or physical limitations? For example, is the device designed in such a way that someone with dexterity issues can use it properly without having to ask for help? Can they hear audible feedback from the device and see the screen well enough?
- The Use Environment(s) - Just as important as the user, the use environment must be strongly considered. Is the device stationary or mobile? Whether in the home or on the go, the device may not be able to rely on proper power outlets and thus may require an alternative power source (e.g., batteries). If alarms are part of the device, manufacturers must ensure that they can “be heard in uncontrolled noise environments typically found in the home." Also consider whether the device will be permanently attached to the user in both private and public settings. How will the user bathe? Will it be comfortable when he or she is sleeping? Should it be made as inconspicuous as possible? How will he or she get through security scanners? The use environment(s) also will play heavily into design considerations as it relates to durability, potential exposure to the elements, and whether or not the device can safely function in the expected conditions.
- Device Considerations - Multiple guidance documents exist to aid in the design process, offering a mix of regulations and advice. Manufacturers will find considerations related to such things as:
- Design controls and software;
- Learnability and intuitiveness;
- Sound;
- Labeling;
- Training; and
- Post-market support.
Training - Device training can take many forms: instructions for use, device labels, information sheets, and formal training, among others. Keep in mind that some users may not be able to understand multiple steps or a long list of warnings and precautions. Additionally, manufacturers should weigh the importance of training users in addition to providing written instructions. Tutorials built into device software can be helpful in cases where the device is used infrequently or the user may be slightly impaired (e.g., low blood sugar clouding mental judgment).
Post Market - What kind of support will be provided to the user once he or she is home using the device? Will 24-hour customer support be available? What happens if the device malfunctions or breaks? Users need to understand their options and what to do if they run into problems.
The standards and guidance documents can assist manufacturers in developing safe, usable products. However, home-use devices still require extensive testing with the intended users—in simulated environments and later in actual home-use environments—to ensure their usability, understandability, and safety. If manufacturers consider the human factors that are specific to home-use devices, then they, their users, and the healthcare system as a whole will realize the benefits.
Over a professional life of nearly 20 years, I have experienced the rolling hills of good and bad workplace culture. In many companies the term “culture” is often thrown about by HR during hokey team-building exercises but is rarely well defined. It is one of those emotional terms people instinctively feel is either good or bad. Culture is a complicated equation with multiple input variables, like diversity, financial footing, and ideology, that hopefully outputs success. The most positive and successful company cultures share the following high-level attributes.
An open atmosphere that encourages creative thought.
In the consulting world, we live and die by our creativity. The freedom to express an idea in a group brainstorming session is critical. Even a bad idea (I’ve had more than a couple) has the potential to inspire someone else into a winning concept. When employees are encouraged to express themselves, free of a creative dictator who bullies meetings, the best ideas quickly rise to the top because nothing is held back.
A diverse group of people who can look at a challenge from multiple angles.
Diversity obviously plays a key role in creativity and culture. The best ideas come from people with the farthest reaching experiences. Staffing people from different corporate, social, and interest backgrounds produces the most diverse concepts because the pool of knowledge is that much bigger. A great company culture needs a broad spectrum of viewpoints to both develop great products and provide an interesting workplace.
Cross-functional teams uninhibited by psychological or physical walls.
Once the key people are in place, the internal teams need to work together. This is probably the single largest hurdle to positive corporate culture in large organizations. I can swing a hammer with the best of them, but milling a precision slip fit is a totally different issue. I need to know where to go and who to see for the optimal solutions to my problems. Physical barriers like separate buildings or even high cubicle walls often compartmentalize people and unintentionally nudge employees into “defending their space” instead of valuing the people they work with. Openness and mutual cooperation can develop naturally when groups of talents (like engineers and designers) are mixed in one location and interact face to face on a daily basis. Innovation does not stop at boundaries; it overcomes them.
A means for groups and individuals to control their own productivity.
An often underestimated catalyst to a positive company culture is a department budget. Speaking as an engineer, we need tools to most efficiently do our work. A budget we can spend as we see fit (free of multiple approval signatures) is wonderfully liberating. It improves productivity by allowing employees to purchase tools they are excited to learn while making their jobs easier. Shiny new toys also have the added benefit of keeping people engaged while at the same time improving the company’s overall capability.
An internal quality/regulatory system that puts the idea first.
A positive culture is also fostered by an internal development system free of creative impediments. There is nothing more inspiration-killing than the thought of forms, signatures, and cross-functional team approvals that need to be navigated before pencil can even be put to paper. Without question, standards and internal quality requirements are critical to manufacturing safe and effective products, but at the knife’s edge of concept creation there should be total freedom. That is by far one of the best parts of working for a consultancy. Our internal procedures were created around the framework of unimpeded development. Creativity comes first.
Contrary to the musings of academics, there is no set path to a utopian society. Human beings are wonderfully unique in thought, passions, and personality, but there are a few simple building blocks companies can put into place to foster a positive corporate culture. Leveraging the unique qualities of individuals to work together toward a common goal can be achieved when they are motivated, feel they have some control of their destiny, and are free to express themselves.
The words “Design of Experiments (DOE)” and “Taguchi” usually conjure up images of cell phones coming off the manufacturing line while someone inspects them for defects. The intended application for DOE is in the manufacturing world, so it’s easy to see why the product design community often neglects these methodologies. However, my experience implementing DOE into product design shows otherwise. Using screening designs, where applicable, can save design firms (and the harried engineers doing endless testing) time, money, and resources.
A DOE is loosely defined as purposeful change to the inputs of a process to observe change in the outputs. Whether we choose a fractional factorial or a full factorial does not change the fact that we are using DOE, unless we are picking a method without being mindful of the advantages and disadvantages of our selection. The image below illustrates the relationship among DOE (also referred to as Experimental Design), screening designs, and the Taguchi method. The Taguchi method is a type of screening design, which is a type of fractional factorial, which falls under the umbrella of DOE along with full factorial.
Becoming an expert in Design of Experiments often requires entire courses, poring over textbooks devoted to the subject, and spending hundreds of hours on real-world experimental design and analyses. A little bit of linear algebra helps as well. The advantage of Taguchi screening designs is that they provide a taste of the subject without requiring a degree in statistics. Engineers can access standard tables and available software, and become “experts” in screening designs over the course of a few hours.
Types of projects in the product design world that might be good candidates for a screening design include:
- A product that is not meeting expected performance results
- A large number of concepts that need to be narrowed down to a few
Popular Experimental Design Methods
How many of us have gone into the lab only to return with more questions than answers? Mechanical engineers like me, who are not usually trained in statistical analysis, tend to employ one of two experimental methods: full factorial or one-variable-at-a-time. Full factorial involves testing every combination of variables in a separate test configuration. The output is a model that theoretically describes the behavior of the system. How often do clients ask for a complete mathematical model of their system? More often than not, the client just wants to make it work. One-variable-at-a-time involves performing one test, looking at the results, and then designing the next test based on those results. The idea is that each test will point you in the direction of the optimized solution. However, this type of testing can often lead to numerous tests with conflicting information and no quantitative numbers to back up observations. A screening design provides the best of both of these methods; it points to the optimal solution with minimal test setups. If a more precise output is preferred, a full factorial can be run with more thoughtful, and ideally fewer, main factors.
Overview of Screening Designs
Screening designs introduced by Genichi Taguchi replace traditional full factorial configurations with a fraction of the tests by sacrificing information about interactions among main factors. Usually consisting of eight, 16, 18, or 27 separate orthogonal configurations, depending on the number of design variables, screening designs point to the optimal discrete setting of each main factor tested. For example, if you were to run an experiment testing seven variables at max./min. values, a full factorial would require 128 different configurations (2^7). A Taguchi screening design calls for eight. If the screening design shows two of those factors to be unimportant, your full factorial count suddenly drops to 32, at the cost of just eight preliminary tests.
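As a sketch of where those savings come from, the standard L8 orthogonal array covers seven two-level factors in just eight runs. The short Python check below uses the published Taguchi L8 table; the helper function is our own illustration of the array's defining property, that every pair of columns contains each level combination equally often:

```python
from itertools import combinations, product

# Standard Taguchi L8 orthogonal array: 8 runs covering 7 two-level factors
# (levels coded 1 and 2). A full factorial would need 2**7 = 128 runs.
L8 = [
    (1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2),
    (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2),
    (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1),
    (2, 2, 1, 2, 1, 1, 2),
]

def is_orthogonal(array):
    """Every pair of columns contains each level combination equally often."""
    n_cols = len(array[0])
    for i, j in combinations(range(n_cols), 2):
        counts = {combo: 0 for combo in product((1, 2), repeat=2)}
        for row in array:
            counts[(row[i], row[j])] += 1
        if len(set(counts.values())) != 1:
            return False
    return True

print(len(L8), "runs instead of", 2 ** 7)   # 8 runs instead of 128
print("orthogonal:", is_orthogonal(L8))     # orthogonal: True
```

It is this balance among the columns that lets each main effect be estimated from only eight runs, at the price of confounding interactions with main effects.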
In one case we implemented a Taguchi screening design to simultaneously reduce testing time and improve performance in an environment with a high degree of noise. Noise is a factor that affects performance but that we have little control over, like swings in ambient temperature, machining tolerances, or in this case, differences in technique from surgeon to surgeon. Our client had a bench-top surgical device that was operating well in the lab but had sporadic performance in the surgical suite.
We identified five main factors on the device we were able to change (design variables) and two noise variables that accounted for most of the variation we observed. We pared the factors down to two or three levels each: a max./min. for continuous factors or a discrete setting for qualitative variables. Our qualitative variables were three swappable blade geometries, each coming in two different sizes. Because none of these can be interpolated between, they are defined as discrete.
With a full understanding of the design variables and noise variables, we debated doing a full factorial versus a Taguchi screening design. A full factorial would have required 2 x 2 x 2 x 2 x 3 = 48 separate test setups neglecting noise and 48 x 2 (levels for noise variable one) x 2 (levels for noise variable two) = 192 separate test setups with noise included (total testing time would be about 28 days). A Taguchi L18 required 18 test setups x 4 runs for each setup to account for noise = 72 tests (total time would be about 5 days). This means that we only perform the test setup, the most time-consuming portion of the test, 18 times, and run four samples through each setup. The Taguchi L18 was the clear winner because of the balance of information about noise and time savings. The Taguchi method would not have been a good choice if our output, system performance, was a nominal target rather than a minimizing or maximizing target, or if the factors had not been independent from one another.
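The run-count arithmetic above is easy to reproduce; a few lines of Python, using the figures from this case, make the comparison explicit:

```python
import math

# Factor levels from the case above: four 2-level design variables plus one
# 3-level variable (blade geometry), and two 2-level noise variables.
design_levels = [2, 2, 2, 2, 3]
noise_levels = [2, 2]

full_factorial = math.prod(design_levels)                   # 48 setups, no noise
full_with_noise = full_factorial * math.prod(noise_levels)  # 192 runs with noise

l18_setups = 18                                  # Taguchi L18 orthogonal array
l18_runs = l18_setups * math.prod(noise_levels)  # 4 noise combos per setup = 72

print(full_factorial, full_with_noise, l18_runs)  # 48 192 72
```

The time savings scale the same way: each of the 18 setups is built once, and only the four noise combinations are run through it.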
After running 18 separate configurations, the output is an indication of what factor settings lead to an optimized system performance. Below is a commonly used chart called a Main Effects Plot that compares each design variable with the overall mean system performance.
This plot indicates that in order to maximize system performance, shown on the vertical axis, the optimized design variable levels are high speed, high mass, elliptical blade geometry, and small blade size. Since the slope of temperature is shallow, I can eliminate it as a major contributor to system performance. Using my optimized design variable settings, I plug the system performance for each into the expected performance equation. This equation predicts the output, which in this case is system performance, when the design variables are set to the desired levels. The expected performance equation indicates that I can expect an increase in system performance from 62% to 73%.
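To make the mechanics concrete, here is a small Python sketch of how a main effects plot and the expected performance equation are computed. The factor names echo the case above, but the response numbers are made up for illustration; they are not the study's data:

```python
from statistics import mean

# Hypothetical results for two 2-level factors; "perf" is performance in
# percent. Illustrative numbers only, not the case-study data.
runs = [
    {"speed": "low",  "mass": "low",  "perf": 55},
    {"speed": "low",  "mass": "high", "perf": 61},
    {"speed": "high", "mass": "low",  "perf": 63},
    {"speed": "high", "mass": "high", "perf": 70},
]
factors = ("speed", "mass")

def main_effects(runs, factor):
    """Mean performance at each level of one factor (one line of the plot)."""
    levels = {r[factor] for r in runs}
    return {lvl: mean(r["perf"] for r in runs if r[factor] == lvl)
            for lvl in levels}

# Pick the level of each factor that maximizes the mean response.
best = {f: max(main_effects(runs, f), key=main_effects(runs, f).get)
        for f in factors}

# Expected performance (prediction) equation:
# predicted = grand mean + sum over factors of (best-level mean - grand mean)
grand = mean(r["perf"] for r in runs)
predicted = grand + sum(main_effects(runs, f)[best[f]] - grand for f in factors)
print(best, round(predicted, 2))
```

A factor whose two level means are nearly equal (a shallow slope on the plot) contributes almost nothing to the sum, which is why it can be dropped as a major contributor.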
The last step is to run confirmatory tests. Now that I have a number for expected performance, it is necessary (and just common sense!) to prove that the optimized variables will get close to this number. The number of tests is up to your own discretion; I usually run enough so that I’m confident in the result. If the confirmatory tests do not match the expected performance number, the issue is probably either that main factors or noise variables are not accounted for, or that there is a high degree of interaction among main factors.
If more data is required to further optimize results, a full factorial or other type of design could be run with more efficiency due to the knowledge gained from the screening design. In this case, most of the variable settings were either linear or discrete, which means I can rely on the screening design data to choose the optimized variable setting. If I suspected one of the variables was nonlinear, like if the speed variable actually looked like a quadratic function, I would want to describe a more complete model to be sure that my results reflect the actual performance. I might add levels to speed and test against the two successful blade geometries to get a more finely tuned model. This means that my original full factorial experiment is reduced from 192 tests to 3 (levels of speed) x 2 (blade geometries) x 2 (levels of noise variable 1) x 2 (levels of noise variable 2) = 24 tests.
Taguchi or not, the key to getting meaningful data out of testing is the old standby: measure twice, cut once. It’s tempting to sprint into the lab, throw something together, and expect things to go a certain way. What usually happens is that you end up wasting half a day figuring out that you need better controls when you could have spent that time carefully planning the test. What variables do I really care about? What do I want to measure? What is my contingency plan if I don’t get meaningful data? Better yet, once you get a meaningful plan together, get the client to sign off on it so that everyone is on board in terms of expectations, timelines, and program risks. Although screening designs may not be suitable for all situations, when they work they save a huge amount of time, money, and resources.
Croarkin, Carroll, and Paul Tobias, eds. Engineering Statistics Handbook (NIST/SEMATECH e-Handbook of Statistical Methods). National Institute of Standards and Technology, Information Technology Laboratory/SEMATECH, 1 June 2003. Accessed 10 December 2012.
Schmidt, Stephen R., and Robert G. Launsby. Understanding Industrial Designed Experiments, 4th ed. Colorado Springs, CO: Air Academy Press, 2005.
Human factors engineering as applied to the design of medical devices has never been as important as it is today, especially since the release of the U.S. FDA’s draft guidance document Applying Human Factors and Usability Engineering to Optimize Medical Device Design in June 2011. With the imminent rise of mobile health applications (apps), human factors engineering principles will become even more vital to the success of this emerging industry and to the safety of the patients for whom they are designed.
Numerous factors have contributed to the recent explosion of mobile health applications and remote patient monitoring, creating a perfect storm of opportunity for this sector of the medical industry. Economic trends, such as the push for cost reduction and new regulations mandating disincentives or penalties for the readmission of Medicare patients into hospitals within a certain period of time, are pushing healthcare providers toward a more vested interest in keeping patients at home. Technology trends, such as the migration to electronic health records, the advancement of digital health applications, cloud computing, the widespread use of social media and the ubiquity of mobile devices, play a huge role. The ability to self-monitor and keep a diary of health issues through the use of mobile apps is strengthening relationships between healthcare providers and their patients.
Consider the following points:
- By 2015, 500 million smartphone users are expected to be using medical apps [1], according to Research2Guidance, a global mobile research group;
- The market for mobile health apps is expected to quadruple to $400 million by 2016 [2], according to ABI Research;
- Over three million patients will be monitored over cellular networks by 2016 [3]; and
- Three of every four dollars spent on U.S. healthcare go toward chronic diseases, and family caregivers are estimated to provide 80% of all long-term care for chronic diseases [3].
FDA to start regulating mobile health/medical apps
In July 2011, the FDA released a draft guidance document on mobile medical applications “to inform manufacturers, distributors, and other entities about how the FDA intends to apply its regulatory authorities to select software applications intended for use on mobile platforms.” [4]
Since the release of this document, there has been movement within the app development industry to understand and anticipate exactly which mobile applications will require FDA approval. The guidance indicates that the following types of mobile applications would be subject to regulatory processes:
- Software applications that can be executed on a mobile platform, or Web-based software applications that are tailored to a mobile platform but are executed on a server; and
- Software applications that have an intended use within the scope of the concept of medical device as regulated by the FDA, and:
- Are used as an accessory to a regulated medical device (for example, an app that connects to a medical device for the purposes of controlling the device in some way); or
- Transform a mobile platform into a regulated medical device (for example, an app that remotely monitors patient vital signs).
(Note that according to ANSI/AAMI HE75:2009, a mobile medical device is not limited to mobile phones and tablets; it can be any device that is mobile, whether carried or rolled.)
You may notice that the guidance does not apply to mobile apps intended to analyze, process or interpret medical data; the FDA has indicated that it will address these types of mobile applications in a separate guidance document. However, the important takeaway is that the FDA will soon be releasing legally enforceable guidelines that will apply to a plethora of medical and health apps already on the market and many more under development. In fact, a bill set to be introduced in the U.S. House of Representatives called the Healthcare Innovation and Marketplace Technologies Act (HIMTA) proposes to establish an Office of Mobile Health at the FDA specifically to provide recommendations on mobile health issues and create a support program to help developers navigate HIPAA privacy regulations.
Importance of applying human factors to mobile apps
The implementation of human factors engineering throughout the design process will be critical for emerging mobile health applications, not only because the FDA is exerting its responsibility to protect and promote public health by regulating these new mobile medical devices, but because it’s good practice and is an essential tool for decreasing patient safety risks while increasing usability and effectiveness.
Take it from someone who has already been through the process. An article written by Brian Dolan for mobihealthnews in May 2011 describes a panel that he moderated with several mobile health app companies who have already navigated the FDA’s 510(k) process successfully. In the article, WellDoc Founder/CEO Ryan Sysko is quoted as saying that if he could change one part of the process, he “would have the FDA provide greater clarity around what successful human factors testing looked like.” [5]
Our research and usability team at Farm knows the detrimental consequences of failing to apply human factors engineering to product development efforts. We have helped many clients whose medical devices have been rejected by the FDA for lack of necessary or appropriate human factors evaluations. As we always remind our clients, human factors is not a one-time testing event that occurs at the end of the development cycle, but rather an ongoing iterative approach that starts at the very beginning.
Implementing an iterative approach is as relevant to the design of mobile medical apps as it is to physical devices. As the more savvy companies have learned, a robust process starts with gathering user requirements and includes preference testing of multiple design concepts; design verification, which could include several rounds of formative usability testing of the product itself and its related documentation; and a final summative validation test that demonstrates the successful mitigation of use-related safety risks. As is the case with physical medical devices, mobile medical app developers will be expected to follow the user-centered design guidelines of the international standard IEC 62366:2007.
During the development process, mobile app designers should turn to established human factors guidelines, particularly those set forth in the ANSI/AAMI HE75:2009. Below are some examples from HE75 that could apply to mobile medical devices and/or apps.
- Carefully analyze the conditions under which the mobile device is going to be used (for example, when a user is moving or being moved, in moving vehicles, while wearing the device or during stationary use, on a rack, above the head, etc.).
- The display on the mobile device should not be obstructed by additional accessories, wires or devices.
- Auditory indicators can be used to supplement visual indicators and should provide the ability to adjust volume, on/off and native language (when feasible).
- When possible, aim to work with existing technologies that already have protocols in place that work with medical industry standards, such as the IEEE 802.11 series of standards for LANs, Bluetooth and cell phone protocols.
- Carefully analyze the conditions under which the mobile device is going to be used and how detrimental it is when the battery runs low.
- Keep important tasks immediately identifiable.
- Ensure that the design takes into account the small size of the screen, limiting the amount of images and text.
- Remain consistent; place information in the same place over a series of screens.
- Offer more than one way to navigate through the system.
- Provide guidance such as prompts or pop-ups when applicable. [6]
Below are some mobile app design best practices published by the mHIMSS Design Tenet Workgroup in January 2012.
- Eighty percent of screen real estate should be dedicated to data; twenty percent to interface.
- For readability, a single sans-serif typeface and up to six type treatments for the typeface are used.
- Color is used sparingly and helps the information, the interaction and the user experience accomplish an app’s intended purpose.
- The app displays controls in a progressive manner, only the ones needed at specific points along the intended workflow.
- The app works within mobile device limitations, such as the lack of hover text, larger required target sizes, and smaller displays.
- The app leverages new capabilities, such as touch-based interactions, location awareness, proximity sensitivity, integrated communications and push notifications. [7]
The rise of the mobile health industry is underway and offers an outstanding opportunity to revolutionize healthcare. In September it was announced that the FCC would act on key recommendations from its mHealth Task Force to adopt wireless health technology [8].
In order for mobile health application developers to be successful, they must create safe, easy-to-use products that can pass the rigorous FDA review process. The critical path to this success begins by stringently applying the principles of human factors engineering. Borrowing from the Hippocratic Oath, the ultimate goal for designers is to first do no harm, and then to do everything possible to provide the best product experience for the patient. The only way to do this is to involve end users in the design process from start to finish.
- Lawmaker Pitches New FDA Office of Mobile Health, Jenny Gold, Kaiser Health News, September 26, 2012.
- 5 Ways Mobile Apps Will Transform Healthcare, Derek Newell, Forbes.com, June 4, 2012.
- Webinar: The Inevitable Imminent Rise of Remote Patient Monitoring, MobiHealthNews, September 2012.
- Draft Guidance for Industry and Food and Drug Administration Staff: Mobile Medical Applications, U.S. Department of Health and Human Services Food and Drug Administration, Center for Devices and Radiological Health, and Center for Biologics Evaluation and Research, July 21, 2011.
- Lessons Learned from FDA Cleared Mobile Health Companies, Brian Dolan, MobiHealthNews, May 5, 2011.
- ANSI/AAMI HE75:2009 Human Factors Engineering – Design of Medical Devices, American National Standard, October 21, 2009.
- Selecting a Mobile App: Evaluating the Usability of Medical Applications, mHIMSS App Usability Work Group, July 2012.
- Fact Sheet – mHealth Task Force Recommendations, Federal Communications Commission, September 24, 2012.