I recently started my ninth year as a mentor for the Lawrence, MA "Gearheadz" high school FIRST robotics team. FIRST is a non-profit program that inspires young people's interest and participation in science and technology through robotics competition. Over the years, it has struck me how closely this experience parallels the "real world" of product development. Both must deal with the challenges that present themselves at the start of any new project, like cost, schedule, and resources. In both cases, there is never enough time or budget. Engineers, admit it: when have you felt that, at the end of a project, you had plenty of both? And on the people side, we all have to learn to work efficiently with team members of varying skill sets and levels, including those who may not yet have developed the skills needed to deal with a project's challenges. In this post, I present some of the commonalities I have observed between students in an inner-city high school robotics club and professional designers and engineers who constantly learn new methods, get inspired to solve hard problems, and ultimately develop a useful device.
Product development firms, like Farm, must constantly generate creative solutions by relying on two fundamental elements: 1) a sturdy yet flexible foundation of a development process, and 2) creative individuals. Process, to a large degree, is a map of those common steps required to complete the journey from problem to solution, or in Farm’s case, from the seed of an idea to the implemented end product. Farm’s process is based on these elements:
- Strategy (e.g., definition of user, marketing, and business needs)
- Specifications and planning (e.g., definition of design inputs and general planning)
- Development (e.g., generation of product concepts through prototyping)
- Verification and validation (e.g., testing to ensure the device does what it’s supposed to do)
- Transfer into production
Farm’s process has played a critical role in all sorts of successful medical device projects, from drug delivery devices to surgical robots to wearable casts.
As for the second critical element: the creative individual. At Farm, we look for a balance of many attributes in our technical staff: open-mindedness, out-of-the-box thinking, internal motivation, intellectual curiosity, knowledge of multiple manufacturing processes, the ability to conceptualize appropriate-to-the-market concepts, versatility in typical design tools (e.g., CAD), and team thinking (the ability to work on abstract concepts in a group setting). There is no ideal proportion of these skill sets, as each individual brings different backgrounds and personal strengths.
This is a great place to start, but like a good spaghetti sauce base, these elements are only the core ingredients. We still need to add more flavors, a bit of personality, and enough time to bring it all together. To stay sharp and grow, we all need to stretch ourselves. This is made easier when the company culture encourages the synergizing of personal and company interests by sponsoring training events that range from internally developed courses on Design of Experiments to Web-based courses on Geometric Dimensioning and Tolerancing to intellectual property licensing exams.
And to foster a positive work atmosphere, organizations should not forget to include the fun stuff like group cook-outs, biking treks through the woods, or ultimate Frisbee games, as these social events contribute tremendously to a cooperative environment, which is key for the long-term health of an organization. This is the building a team part... it takes time for people to gel and get into a groove.
So how does this relate to a FIRST high school robotics club?
Let's first consider the process. The odds are that a process-starved professional organization will eventually fester into true anarchy (and ultimately bankruptcy!); the only question is when. Similarly, in a high school club, any teacher will agree that should you dare to combine the youthful energy, immaturity, and hormonal "issues" of teenagers with no process, an even more elevated level of anarchy will manifest itself, and at a much more accelerated rate.
Drawing from my experiences, I’d like to leave you with a few takeaways, which are observations that apply to both the FIRST high school robotics program and the “real world” of product development. Recognizing these similarities should help bring home the pertinence of the robotics program for today’s youth as well as illustrate how professionals can gain valuable usable experience by contributing to such programs.
Takeaway #1: Mentoring kids offers hyper-fast-paced training for a professional by: 1) testing how robust your process is (describing a process to a teenager is usually harder than describing it to a professional peer or direct report), and 2) sharpening your saw by helping you recognize potential weaknesses that call for additional training. Think of it as a "HALT (Highly Accelerated Life Test) for managers" training program. To be sure, you'll see the defects in your management techniques and abilities right away. (And you'll have to fix them without delay to keep things rolling.)
Takeaway #2: Tossing these high school kids into new spaces they are not yet fully prepared for can give them wonderful exposure to the dynamics of working in a real-world group, including having to learn technical and team-related skills "on the job." Though on a different scale, young professionals experience the same thing, so make a point to nudge your less experienced staff into higher-level tasks from time to time. It's okay to try something, be challenged, and struggle a little; it is all part of the learning process!
Now let's consider the "staff." In a professional setting, you can usually count on the trainee already having a sound background through some combination of formal education and experience. Not so in the case of the high school robotics club. In most cases for the Gearheadz, we need to teach fundamentals like how to drill a hole (safely and relatively precisely) long before we address the more complicated aspects of conceptualizing or the fundamentals of mechanical and electronic elements and programming. The process is important, and it takes time. Eventually, meeting on multiple evenings each week and on Saturdays during the six-week winter build season, the kids all experience how to turn a problem statement (the rules of the game, which change from year to year) into concepts, and ultimately into a 100-pound, four-foot-tall remote-controlled robot that competes with and against five other robots developed by similar teams from all over the world.
Takeaway #3: Training needs to be delivered in a timely fashion and at an appropriate level. This is especially true when the student is taking his or her first steps, but it applies at any experience level. None of us likes it when the subject matter is presented at too high or too low a level. Knowing your audience (and being flexible) is the mantra for anyone who teaches or makes presentations.
In the end, if we don't all stay focused on the need to constantly train the less experienced as well as ourselves, then we take the first step toward their obsolescence as well as our own.
And what about the fun stuff? "Fun" and "success" can have many different definitions, but they're not mutually exclusive. We Gearheadz mentors watch the blood, sweat, and tears that every student puts into each robot every year and see how the experience can positively impact lives. One particularly memorable moment for me was the shriek of excitement from a freshman after she drilled her first hole using the drill press. That student was later voted president of the robotics club as a senior and is now a senior robotics major at Worcester Polytechnic Institute. Another success is the quiet kid with struggling grades who eventually raised his grades to the team's standards and morphed into a leader and a skilled robot driver. Still another is last year's co-captain, who joined the team as a junior with no intention of majoring in a science but is now an engineering major in college.
Takeaway #4: Though it is still called work, keep it fun. You don’t need to be a teenager to admit that life is too short to not have fun!
In conclusion, there is no longer any question that America needs to bring its A-game to raising the understanding of the sciences among today's students, from kindergarten through college. Unfortunately, too many people finger this as a problem for the schools alone to address. However, there are many opportunities for people to pitch in to help solve the problem: to expose kids to this fascinating world of technology, to let a kid practice leadership as part of a team, to further motivate the kid who already has that sparkle of interest in his or her eyes, or to draw the aimless kid away from the seemingly inevitable dead-end choice. FIRST is one of these programs.
And we as professionals get to sharpen our "work" skills in the meantime!
A critical indicator of progress during the early stages of product development is the pace at which new information is created; such information is simultaneously the foundation upon which new concepts are built and the lens through which current concepts are viewed.
So how best to create new information?
Experience helps; all things being equal, having done something before encourages success the next time around. Beyond that it’s prototype, evaluate, repeat. Quickly.
Prototyping isn’t the bottleneck; between rapid prototyping and traditional shop work, concepts can be realized almost as quickly as designers and engineers can think them up. The limiting factor is evaluation, particularly trying to quantify the fuzzy front end.
Evaluation in the early stages typically focuses on low-fidelity prototypes, demonstrations by expert users, videos of actual procedures, and benchmarking of competitor products. Much of the information gathered at this point is necessarily qualitative. Still, some quantifiable questions remain: how much deviation is there in the angle at which different doctors hold a laparoscopic device? What is the path of personnel during a procedure? How does the movement of someone wearing a brace compare to that of a healthy person?
Questions such as these can be answered (some with surprising ease) using computer vision (CV). All that is required is a camera to capture the images or videos and a tool such as MATLAB or Python to analyze them.
I’m going to share a simple example of CV in just a moment, but first a word of caution: some circumstances lend themselves to CV more than others. Specifically, straightforward CV projects will share the following characteristics:
- Measurements of interest are 2D (3D requires stereoscopic vision)
- The environment is controlled (i.e., consistent lighting, plain background)
- Only low precision is required (high precision requires more fine-tuning of the code; camera resolution is also a factor)
In the example below, a Post-It is placed near the ankle, knee, and thigh. From this modest setup, angle data is extracted during a portion of gait.
Setup time: 20 min. Coding time: 2 hours (without reusing code).
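A minimal sketch of the kind of analysis behind this example, assuming color-marker tracking in plain NumPy on a synthetic frame. The marker color, positions, and the 20-pixel clustering gap are illustrative assumptions, not the original setup (which may well have used MATLAB or OpenCV):

```python
import numpy as np

def marker_centroids(frame, color, tol=30):
    """Find centroids of distinctly colored markers (the Post-Its).
    frame: HxWx3 uint8 RGB image; color: (r, g, b) of the marker.
    Assumes the markers are vertically separated (thigh, knee, ankle)."""
    diff = np.abs(frame.astype(int) - np.array(color)).sum(axis=2)
    ys, xs = np.where(diff < tol)
    order = np.argsort(ys)
    ys, xs = ys[order], xs[order]
    # Split the pixel list wherever consecutive rows jump by more than 20 px.
    breaks = np.where(np.diff(ys) > 20)[0] + 1
    groups = zip(np.split(ys, breaks), np.split(xs, breaks))
    return [(x.mean(), y.mean()) for y, x in groups]

def joint_angle(p_top, p_mid, p_bot):
    """Included angle (degrees) at the middle marker, e.g., knee flexion."""
    v1 = np.subtract(p_top, p_mid)
    v2 = np.subtract(p_bot, p_mid)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Synthetic frame standing in for one video frame of the gait capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
for cx, cy in [(300, 100), (320, 240), (300, 380)]:      # thigh, knee, ankle
    frame[cy - 8:cy + 8, cx - 8:cx + 8] = (255, 255, 0)  # yellow Post-It

thigh, knee, ankle = marker_centroids(frame, (255, 255, 0))
angle = joint_angle(thigh, knee, ankle)
print(round(angle, 1))  # included knee angle for this frame
```

Run per frame of a video, this yields angle-versus-time data for a portion of gait.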
The great thing about computer vision as a development tool is that it's expandable. Once you can perform a simple task such as the one I've shown here, it's a short step to more advanced topics like real-time vision and skin detection. Once you start adding tools like these to your tool belt, you'll be surprised at the uses you find for them.
What is the most valuable engineering resource Farm has at its disposal? From a cost/benefit perspective, it's hard to argue against the co-op student. The benefits are straightforward: a company identifies talent while utilizing low-cost labor, and a student gains first-person knowledge of an industry and its practices. Farm currently employs two co-ops at a time in six-month employment cycles, and being a former co-op student myself (now an engineer at Farm), I feel compelled to share my thoughts on how both companies and co-ops can benefit most from the cooperative educational experience. While my previous co-op experiences were generally positive, I have also observed a number of negative ones. Here are some recommendations, from both the employer's and the student's perspective, based on traits that commonly create positive co-op experiences.
For Co-op Employers
Have Work on Day One
Co-ops generally arrive to work on their first day with their motivation and ability at inversely proportional levels. They want to help, but they don’t know how. Having a task from the get-go is extremely important as it creates an immediate sense of value and worth. This is a challenge for many product development firms because co-op-appropriate work doesn’t always exist. In this situation, create work. If a co-op lacks CAD skills, create a design problem that requires grasping complex surfacing or master models. Try not to rely on stock tutorials and lessons as most students have already completed these. A unique and well-defined task can go a long way toward creating lasting co-op motivation.
Understand a Co-op’s Goals Before and After Hire
While interviewing a co-op, most companies will inquire about what a student wants to get out of the experience. Some will answer honestly, some will tell the interviewer what they want to hear, and some simply don’t know. Regardless, it is important to continue this dialogue after the student is hired and throughout the experience. Interests and aspirations change rapidly for co-ops, and showing that you value their goals is essential.
Throw Them into the Fire
With a good co-op student, a product development firm can take the burden off of its full-time employees and rely on the co-op to complete billable tasks. This inherently requires a certain level of trust: trusting the co-op to complete deliverable tasks at a high level. Because of this, many firms wait to give co-ops responsibility until they have definitively proven themselves over time. Instead, try throwing them into the fire. Treat co-ops like full-time employees while simultaneously checking their work exhaustively. This method might eat into company overhead in the short term, but it's worth knowing a co-op's true capabilities as soon as possible.
Create Positive Press
The primary goal for a co-op employer is to get quality work from an otherwise inexperienced employee. While this is a completely valid goal, co-op employers often overlook the goal of creating positive press for their companies. When co-op students return to class, they reflect on their experience. They make presentations, write reports, take surveys, and above all share their reflections. Because of this, a positive co-op experience for one student can completely influence the opinion of hundreds of others. Why not make a positive impression on the engineers and designers of tomorrow?
For Co-op Students
Understand Your Company
You need to do your homework! A co-op student who researches the ins and outs of a company will get in the game much faster once employed. Prior to employment, co-ops should strive to gain a full understanding of a company’s core competency, clientele, competitors, business practices, past projects, and future goals. It might not be easy to find, but the information is out there, especially if the company is classy enough to have a blog. Possessing this knowledge allows co-ops to understand where they can fit in and allows them to suggest how they might be most valuable to a company. It isn’t enough to follow instructions. Good co-ops should assume they are being underestimated, and use company knowledge to ensure that they are utilized as effectively as possible.
Know How to Learn
Learning at a product development firm is stressful. For co-ops, balancing personal educational goals with the hectic demands of contract product development can seem borderline unachievable. Fortunately, product development firms are among the most efficient educational outlets out there, provided you know what you’re getting into. Here’s what to expect:
When billable work doesn’t exist, co-ops are often used for company initiatives or are left to their own devices to complete lessons and tutorials. This is the best time to find work that benefits both co-op and company. For engineers, learning a relevant programming language, becoming familiar with GD&T, developing machine shop skills, and becoming acquainted with new manufacturing techniques are all examples of valuable ways to spend unbillable time. Try to focus on subjects that will be applicable to upcoming projects.
Every time I’m forced to learn something new for a project, Bill O’Reilly’s voice pops into my head yelling “We’ll do it live!” This is where the most efficient learning happens. You’re forced to master and execute a skill in a very short timeframe, and getting it wrong isn’t an option. You pull information from coworkers, past projects, and outside contractors simultaneously. It isn’t ideal from a low-stress point of view, but from an efficiency perspective, there’s nothing better. As a co-op, being ready and eager for this type of education is indispensable.
Show Off Your Skills
A major source of co-op frustration arises when students aren’t given challenging work. The cause is understandable from both perspectives; co-ops want their work to teach them new skills, but employers don’t want to give important tasks to inexperienced workers. Co-op skill also varies widely from student to student, so employers are often conservative with the work they delegate. To solve this problem, co-ops should try to advertise their capabilities as much as possible. A company-wide “About Me” presentation or public portfolio that outlines previous projects and experience can be extremely beneficial for both co-ops and employers. Additionally, “About Me” presentations should be used to express relevant educational goals and interests. It seems obvious, but many co-ops will complete entire internships without ever expressing their skills and goals.
A good co-op experience has the potential to produce both a valuable education for a student and a budget surplus for an employer, a win-win scenario. Unfortunately, co-ops are often underutilized in the contract product development world. Deadline-driven projects necessitate instant results, and project managers just don't have the resources to bring co-ops up to speed. Instead, co-ops sit idle and the reciprocal relationship is broken. It's a shame, because when the co-op experience is successful, the long-term advantages can be immense. Budgets are balanced, talent is identified, and knowledge is gained. A quality co-op experience is something for both students and employers to strive for, reflected by the fact that almost all of Farm's entry-level engineering hires are previous co-ops. It's evidence that a well-executed co-op program benefits everyone.
Lee Panecki
Mechanical Engineer at Farm
Northeastern University Alum
Interested in being a Farm Co-op? Visit Farm's Career Page!
In this enlightened age of “intelligent engineering”, data and the interpretation of data have become critical to the success of a project. The development tools in our high-tech toolbox are always expanding and bringing new insight to increasingly complex systems. But without a detailed product specification roadmap, we fall into the trap of generating oceans of useless data. The design team of today needs to be keenly adept at gathering, filtering, and utilizing data for the best possible patient outcome as well as commercial success.
Engineers love data. We always want more high-speed cameras from more angles to record more prototype scenarios to verify more simulations. This brings us to one of a program manager’s greatest frustrations: usable data. Whether you are designing an autonomous surgical robot, a patient-specific knee implant, or a paperweight for all the new regulations, volumes of data are useless without the proper specification framework to filter and process it. This is the role of the Product Requirement Document (PRD).
The PRD is designed to keep the development team on task and focused on specific product functionality. If the PRD is written at the end or evolves during the development program, truth-seeking engineers will generate ever-increasing volumes of data. We do this to provide program decision makers a broad spectrum of options with varying degrees of feasibility. Working without a PRD is absolutely the slowest and most expensive way to develop a product. In a perfect world, every program would start with a detailed PRD backed by historical benchmarking data, but this is not an option for truly innovative technology looking for a production platform. Even without a PRD, the development team will still need critical, effective processes to reach definitive milestones and eventually a market-conquering product. Without hard stops during the process to critically analyze data, you will get stuck in never-ending loops of data gathering and end up creating more variables than answers.
Because development resources are finite, brainstormed concepts need to be vetted quickly and either fail fast or prove promising and carried further. If hanging weights off of a rapid prototype strapped to a car bumper gets you pertinent data in hours as opposed to machining prototypes and a detailed test fixture over four weeks, go with the car bumper. The most challenging aspect is to define the critical variables and harmonize them into a succinct goal. A great development team finds the facts in the figures and keeps the thought process moving forward quickly and efficiently.
One of the most powerful tools for reducing and accelerating prototyping loops is finite element analysis (FEA). FEA has made giant leaps forward in capability over the past 15 years. Large assemblies are now translated directly from any CAD environment, contact pairs are automatically assigned, and a mesh is created after the click of one button. It is the perfect tool to fail fast in both positive and negative ways. The results are shown plain as a rainbow on the computer screen, but are they fact? The quickest way to further a concept with FEA is through comparative analysis. If option A predicts X stress and option B predicts 1/2X stress with the exact same setup, you are moving in the right direction. The question then becomes: are the physical stress levels at X or at 10X?
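The comparative mindset can be illustrated with a back-of-the-envelope version in Python: two bracket cross-sections under the same bending load, screened with the closed-form formula rather than FEA. The dimensions and load here are made up for illustration:

```python
# Quick comparative screen: rectangular cantilever in bending,
# peak root stress sigma = 6*F*L / (b*h^2).
F = 100.0   # N, applied tip load (assumed)
L = 0.050   # m, moment arm (assumed)

def max_bending_stress(b, h):
    """Peak bending stress (Pa) at the root of a rectangular cantilever
    of width b and thickness h (meters)."""
    return 6 * F * L / (b * h**2)

sigma_a = max_bending_stress(b=0.010, h=0.005)  # option A: 10 x 5 mm section
sigma_b = max_bending_stress(b=0.010, h=0.008)  # option B: 10 x 8 mm section

# A ratio below 1 says option B is the direction to pursue; the absolute
# numbers still need physical confirmation, exactly as with FEA results.
print(sigma_b / sigma_a)
```

Like the FEA comparison in the text, the ratio is trustworthy even when the absolute values are not, because setup errors tend to cancel between identical setups.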
There are many ways to validate FEA simulations in the real world. With the ever-decreasing cost of 3D printers, you can be up and testing concepts in hours. The 3D printing materials will not be exactly representative of production materials (build layers, for example, are ideal crack slip planes), but remember that stress is force/area. Whether a part is printed or machined, both will have the same high-stress location when tested in identical setups, just with different ultimate loads and displacements. The issue arises when your prototype fails in ways you did not predict or expect. The failure load and location are in essence only one data point in a complex system. Would thinning out a section and increasing flexibility actually be more beneficial than thickening and stiffening a part? Does adding a gusset to support a corner relocate the high stresses to an even more critical location? To fully comprehend the behavior of a part or assembly, you need multiple data points within the system to trend or predict how your concept is performing as a whole.
Strain gages provide very enlightening data for validating FEA simulations. I have found over the years that many people either don't understand or are afraid of using strain gages. The key is to confirm your gage application and data-gathering process on something simple, like a bending beam. Perform the hand calculations, confirm the calculations with an FEA run, and match your physical gage readings to the predicted results. Once you validate your process, a whole new array of data gathering is available to physically map the performance of your concept. Strain gages have been immensely useful for sorting out FEA constraint issues and uncovering inappropriate setup assumptions. It is rare in the real world that anything is truly fully fixed. The development team will still need to iterate between the FEA setup/results and gage readings, but at least they will have a physical data path to correlation. It takes an experienced technical lead to understand when the analysis and physical testing are aligned well enough to predict the trend for your specific functional requirement. Strain gages are cheap; apply them liberally.
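The hand-calculation step for the bending-beam check can be sketched in a few lines of Python. The beam material, dimensions, load, and gage reading below are all invented for illustration:

```python
# Surface bending strain under a gage on a rectangular cantilever:
# eps = sigma / E = 6*F*L / (E * b * h^2)
E = 69e9          # Pa, Young's modulus of aluminum (assumed material)
b, h = 0.020, 0.003   # m, beam width and thickness (assumed)
F = 10.0          # N, applied load (assumed)
L = 0.100         # m, distance from the gage to the load (assumed)

eps_predicted = 6 * F * L / (E * b * h**2)   # dimensionless strain
predicted_microstrain = eps_predicted * 1e6

measured_microstrain = 470.0  # hypothetical reading from the gage amplifier
deviation = abs(measured_microstrain - predicted_microstrain) / predicted_microstrain

# If hand calc, FEA, and the gage all agree within a few percent,
# the gaging and data-gathering process is validated.
print(round(predicted_microstrain), f"{deviation:.1%}")
```

An FEA run of the same beam should land on essentially the same number; once all three line up, the process can be trusted on real geometry.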
The desire to compress timeframes through the “intelligent engineering process” has become a reality across multiple industries. Development teams of today have the most incredible physical and virtual tools at their disposal, but they need to know how to most efficiently utilize them. Even without a PRD, data still needs to be gathered, sorted, and acted upon to develop and validate a concept. The right data in the right hands at the right time does make all the difference. Budgets, personnel, and time to market have all been significantly reduced, while at the same time product expectations have never been higher.
The 2012 Symposium on Human Factors and Ergonomics in Health Care took place March 12-14 in Baltimore, Maryland. There were nearly 400 human factors professionals, manufacturers, healthcare providers, policy makers, and other stakeholders there, including many of my friends and a number of Farm's clients.
On the final day, Ron Kaye and Quynh Nhu Nguyen from the FDA’s Human Factors Group presented the Closing Plenary Session: FDA Human Factors Q&A. The chair and moderator was Anthony D. Andre, who organized the symposium. Previously during the conference, the audience had submitted questions for consideration, and the ones below were selected to be answered by the Agency. I’ve done my best to paraphrase based on my notes.
- Q) When do you expect the FDA draft human factors guidance document to be finalized?
- A) The guidance is expected to be finalized by the end of this year. The human factors team is going through 600 good comments and is making changes.
- Q) What is the average turnaround time on a validation protocol review?
- A) For devices where only CDRH is involved, a minimum of 30 days. For combination drug-device products where both CDRH and CDER are involved, it is estimated at 30-60 days.
- Q) What are the two most common or serious mistakes found in FDA submissions?
- A) No human factors work done at all, and no tangible link between risk priority and user tasks. "We have had a lot of success but there is still a lot to be done." Now that the new guidance has been out there, the FDA has found that companies are following it and practicing good process.
- Q) Do I have to have a perfect product with zero errors?
- A) The point is to find errors and fix them. The fact is some things just cannot be designed out of the device. If this is the case, the device could still be better than other products on the market, even if it's not perfect. When a manufacturer claims that a device is as safe as possible, sometimes the human factors team will meet with the medical officer and get his or her opinion. FDA reviewers take that input very seriously and try to make the best possible decision. The final and most difficult question is: Do the benefits outweigh the risks?
- Q) If the new product is clearly better and safer than the legacy (predicate) device, do you get "credit"?
- A) Reviewers look at each submission in isolation and don't compare. For example, if one device has 10 serious errors and another has five, will the FDA accept the one with five? No. The question is: What are the problems, and can they be fixed? For some products misuse can cause death; for others misuse might cause minor irritation. This doesn't mean they will ignore the less risky product, but in reality, limited resources force the Agency to focus on the more dangerous products.
- Q) Can manufacturers request specific reviewers?
- A) Yes, you can include the request in a cover letter and send that along with your submission.
- Q) If there is an existing product on the market and you have made only one component change, do you have to revalidate with the same rigor?
- A) It depends on what the component was. If a case can be made, you can focus the validation testing on just that one component, but it depends on the risk associated with the component in question.
- Q) Are there categories of devices that do not require human factors testing?
- A) If there is no significant user interaction and the device is low risk, then maybe. The FDA is working on a list of these devices, which will be published at some point.
- Q) What is the FDA's approach regarding software and electronic medical records?
- A) This is not being decided by the human factors team; Dr. Patel in the Management Office is leading the effort. We have reviewed stand-alone software applications and evaluated whether or not critical actions are supported well by the UI. It can get complicated, however.
- Q) Do you have to test a kit that includes approved syringes for home use?
- A) It depends on the risk analysis and the justification for why testing is not needed. The FDA takes into account user profiles (e.g., tremors). Do users have unique aspects affecting the use of syringes?
- Q) How is delay of therapy viewed?
- A) It depends on the length of the delay and the clinical relevance of speed. For some products, like AEDs and infusion pumps, a delay is critical. You should evaluate what the delay means clinically.
- Q) For combination devices, should you incorporate anything learned from a Phase Three Clinical Study into your human factors studies?
- A) You should conduct your summative testing first, make changes, and then go into your Phase Three Clinical Study.
- Q) What if you are trying to get a combination device approved but there are ancillary steps, like washing your hands or preparing the injection site, that you know patients don't do? It is out of the manufacturer's control.
- A) In general, if these tasks are critical to safe use, you should look at them.
- Q) Can you test a device in phases? For example, test the basic tasks first, then have people use the device for a while, learn the more complex tasks during use, and test those later?
- A) It depends on the training that's necessary to use the device safely and effectively. We would need to know more about the device you're referring to in order to answer the question.
These were certainly some interesting questions, and I along with the rest of the audience really appreciated the Agency’s willingness to address them in a public forum. It was a perfect closing to the event.
The quality of a CAD database is directly correlated to its adaptability.
What do I mean by that? If you've designed a device (let's say an orthopedic brace) and want to change something fundamental to the design (e.g., the shape and size of the arm that fits in the brace), an adaptable database will dutifully accept your request and update all subsequent surfaces and parts so that your design intent remains intact.
But while all high-quality traditional databases have some degree of flexibility, precious few of them are adaptable.
Consider, for example, the humble office chair. It has some surfaces that attempt to conform to your body, a few mechanisms to adjust height and tilt, several wheels at the base, spokes reaching out to the wheels from the central column, and perhaps adjustable armrests as well. A traditional CAD database would have, by virtue of the techniques used in its development, a certain degree of flexibility. An adaptable database would take things a step further.
Warning: The sections on traditional databases may seem slightly esoteric if you don't spend much time in CAD. For the big picture, skip ahead to The Value of Adaptable Databases.
Sticking with the chair example, a high-quality traditional database would allow you to adjust the range of motion of the tilt mechanisms, perhaps the number of wheels at the base, the length of the spokes attaching the wheels to the center column, and so on. Generally speaking, any feature of the chair that can be defined by a single number could be changed and updated. Some of the means used to reach such an end:
- Your CAD should be DRY (Don't Repeat Yourself)
- Surface Refs > Edge Refs > Point Refs
- Good dimensioning is as little dimensioning as possible (but not less)
- 10 simple features > 1 complex feature
- and so on...
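As a rough illustration of the first rule in that list (DRY), imagine driving every feature from a single table of named parameters instead of re-typing the same numbers feature by feature. This sketch is hypothetical and in plain Python rather than any particular CAD package's API; all names are invented for illustration:

```python
# Hypothetical sketch: one parameter table drives every feature (DRY),
# so a single change propagates to everything that depends on it.
chair_params = {
    "wheel_count": 5,        # wheels at the base
    "spoke_length": 280.0,   # mm, center column to each wheel
    "seat_height_min": 420.0,
    "seat_height_max": 540.0,
}

def seat_height_range(params):
    """Derived value: computed from the table, never re-typed."""
    return params["seat_height_max"] - params["seat_height_min"]

def spoke_angles(params):
    """Wheel spokes placed by count, not by hand-entered angles."""
    n = params["wheel_count"]
    return [i * 360.0 / n for i in range(n)]

# Changing wheel_count from 5 to 6 updates every dependent feature.
chair_params["wheel_count"] = 6
assert len(spoke_angles(chair_params)) == 6
```

The same idea applies inside a CAD model: features that reference one shared dimension survive a change; features that each carry their own copy of the number do not.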
While these will take you a long way, adaptable databases, as mentioned before, go further.
By implementing additional techniques on top of traditional best practices, design intent is able to be so thoroughly baked into an adaptable database that its flexibility is no longer limited to a few discrete parameters. Instead, it's able to read user-specific scan data and adjust the height, length, width, and surface curvature such that the resulting database is now custom-fit to the user.
Consider the following techniques when building an adaptable database. The required level of adaptability will determine to what extent (if at all) these additional techniques should be implemented.
Splines are complex. While the descriptions of lines and arcs are so clear that it's difficult to remember not knowing them, a description of splines will quickly get you into calculus territory. It's no wonder that, looking around, you are likely to find yourself surrounded by products built from lines and arcs rather than splines. The engineers of those products are telling you something: using splines is hard.
That said, splines are the lifeblood of adaptable databases. This is especially true for databases that need to adapt to user anatomy: there aren't (m)any pure arcs anywhere on our bodies, and trying to force a fit is a path fraught with peril.
What is truly important?
When an FEA study reveals a weak spot, it's likely the problem can't be fixed by changing one parameter. In such a case, the traditional approach is to adjust several dimensions in the model to sufficiently mitigate the problem. This works for traditional models, but leaving such a critical area defined implicitly rather than explicitly puts your database at risk when it scales.
The adaptable approach is to rebuild the model such that the weak spot is explicitly defined, ensuring that it does not poke its head out again as the database begins to adapt.
Make it data-driven
In the age of big data we are limited not by the amount of data we have, but by our ability to leverage it; product development is no exception. Anthropometric data, use scenarios, FEA, cost of goods, etc. would all, in an ideal world, drive the CAD database. Instead of such an explicit relationship, information typically makes its way into the CAD database via a more circuitous path:
- Data is obtained
- CAD is adjusted to reflect new data
- Design is prototyped
- Prototype is tested
- Tests produce data
This continues until the test data indicates that the performance of the prototype is acceptable. It's a sound approach, but difficult to scale. Geometry that is explicitly driven by data rather than becoming an iterative approximation of it is bound to be more adaptable.
Once you have built a sufficiently adaptable model, you have the opportunity to take advantage of some high-end capabilities.
Top CAD programs provide the capability (either through additional off-the-shelf extensions or access to an application programming interface) to set up studies that can take a "generic" of your design, read in a 3D scan of an object (e.g., a part of a patient's anatomy), compute various analyses between the “generic” and the scan (e.g., distance between the two at a specific point), then adjust dimensions to optimize those parameters (e.g., minimize the aforementioned distance).
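As a toy stand-in for such a study, written in plain Python rather than any real CAD extension or API (all names are invented for illustration), one can fit a single dimension of a "generic" circular cross-section to scanned points by minimizing the mean distance between them:

```python
import math

# Hypothetical sketch: adjust one "generic" dimension (a radius) so that the
# mean distance to a set of scanned cross-section points is minimized,
# mimicking the read-scan / analyze / optimize loop described above.

def mean_distance(radius, scan_points):
    """Mean absolute gap between the circle and each scanned point."""
    return sum(abs(math.hypot(x, y) - radius) for x, y in scan_points) / len(scan_points)

def fit_radius(scan_points, lo=1.0, hi=100.0, steps=2000):
    """Brute-force 1-D search; a real study would use the CAD optimizer."""
    best_r, best_d = lo, float("inf")
    for i in range(steps + 1):
        r = lo + (hi - lo) * i / steps
        d = mean_distance(r, scan_points)
        if d < best_d:
            best_r, best_d = r, d
    return best_r

# Scan points lying on a circle of radius 42 (a stand-in for anatomy data).
scan = [(42 * math.cos(a), 42 * math.sin(a)) for a in (0.1, 0.9, 2.0, 3.5, 5.0)]
fitted = fit_radius(scan)
```

A real anatomy fit would optimize dozens of spline-driven dimensions at once, but the mechanism is the same: measure the gap between the generic model and the scan, then adjust dimensions to shrink it.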
These extensions and APIs are the mechanisms that enable adaptable databases to adapt.
The Value of Adaptable Databases
You may have already guessed why most databases today aren't adaptable: it's hard, and the benefits usually aren't significant enough to make the effort worthwhile. Most companies that offer products in various sizes carry relatively few variations (think S, M, L, XL). In that case, adaptability is superfluous.
But what if a competitor were able to offer thousands of sizes and could clinically prove that its device, by virtue of its superior fit, improved performance, patient compliance, and recovery time? What if such a device were nominally more expensive but (for the reasons just mentioned) provided greater value?
In such a circumstance, the ability of your CAD model to adapt goes from being a nicety that smooths the ECO process and manufacturing tweaks to a full-blown competitive advantage.
Today, this is the case for high-end implants. Some companies are currently working on such an approach for prosthetics. Forward-looking organizations will keep tabs on this trend, and adapt accordingly.
*This blog post was originally featured on Medical Device Summit's MEDesign blog.
A process-based analysis to better understand why smart infusion pumps have become so problematic. Hint: Design alone is not to blame.
The use of smart infusion pumps has become ubiquitous in many U.S. and international clinical settings due to the tremendous patient benefits these devices offer. Briefly, infusion pumps are medical instruments that deliver medications to a patient’s body in a “controlled, precise, and automated manner” (FDA, 2010, p. 2). There are several key attributes that distinguish smart infusion pumps from their more traditional counterparts. The key benefits of smart infusion pumps include: (1) the ability to incorporate a large medication library into the device, (2) the ability to alert users of potential use errors, and (3) the ability to collect usage data, which can be used to improve work practices.
These benefits, however, do not eliminate use-related risks. Improper use of and/or malfunctioning smart infusion pumps can cause serious adverse health effects and even death. In fact, since 2005 more than 56,000 adverse events have been reported and more than 87 product recalls have been conducted (FDA, 2010, p. 3). This is important to understand because 90 percent of hospital patients receive medication through infusion pumps during their stay (Brady, 2010).
Are We Identifying the Right Problems?
The dichotomy is clear, but what is not clear is why smart infusion pumps have become so dangerous. Unfortunately, many stakeholders, including the FDA, have been quick to point out inadequacies in infusion pump designs as key obstacles to patient safety. While design flaws have been identified, the real problem is more complex and involves multiple systems and stakeholders. Design alone cannot solve the entire problem. It is important, therefore, that we use a detailed analysis of the infusion pump process to identify more pervasive issues.
Systemic Challenges to Smart Infusion Pump Design
The opportunity for error occurs across multiple steps of the infusion pump process, with many occurring both before and after actual use of the smart infusion pump. Therefore, the infusion pump process will be analyzed according to six key areas to better understand why certain problems have persisted. Key areas include: (1) policy, (2) prescribing and ordering, (3) medication storage, (4) medication preparation, (5) administering, and (6) monitoring.
Policy
One of the biggest challenges to understanding the root cause of problems associated with infusion pump use is the lack of infusion pump standards. There is little agreement on what should be standardized, let alone what the standards should be. Many stakeholders believe consensus on infusion pump standards would lead to immediate, short-term benefits and safer infusion pump use. As a starting point, standards should include drug name, recommended minimum and maximum dosages, upper and lower administration rate limits, standardized concentrations, and dosing units. The standards should also address special cases, including certain patient populations and clinical conditions, medication administration techniques (i.e., IV push), and monitoring requirements (ASHP, 2008).
Despite the relative acceptance of these measures, there are still many challenges to overcome. For example, the culture of fear propagated by liability concerns has made it increasingly less likely that practitioners will share their drug libraries. Nothing short of a cultural shift will help overcome the secretive nature of this information, which would allow drug libraries to be more accurate and more comprehensive.
Prescribing and Ordering
Current prescription and ordering practices do not dictate how IV medications are ordered. Practices differ among practitioners, geographical regions, and clinical settings. As a result, different medications often end up looking very similar, leading to confusion and increasing the likelihood of administering errors. This situation could be improved by decision support systems that reinforce best practices at the point of care, which smart infusion pumps are in a unique position to provide.
Medication Storage
States often control professional regulations, including storage practices, which can lead to very different practices from one clinical setting to the next. Drug compounding, for example, often takes place in different locations, making it difficult to store “commercially available ready-to-administer infusions” in consistent locations, such as patient care areas (ASHP, 2008, p. 2370). Unfortunately, storage problems are not something that can be solved by the design of the infusion pump itself. Only standardized storage practices will lead to quicker response times and improved patient outcomes.
Medication Preparation
Many IV medications come in forms that need to be manipulated by a person before being administered. Leaving the admixing to any practitioner, however, leaves the door open to human error. In addition, there is no recognized format for labeling admixtures (ASHP, 2008). These are both key causes for concern. Providing medications in ready-to-administer form would greatly reduce IV drug administering errors, and standardized labels with machine-readable bar codes would enable smart infusion pumps to verify the correct medication is being delivered to the right person.
Administering
Administering the drug is one of the most difficult steps of the process because the hospital environment challenges the user’s ability to make appropriate decisions. In addition, many users will find workarounds for systems that unintentionally increase time delivering medication (Yang, Ng, Kankanhallia, & Yip, 2011). The perceived need for speed consistently outweighs safe practices in clinical settings (Koppel, Wetterneck, Telles, & Karsh, 2008). It is not surprising, then, to discover that users override approximately 90 percent of all infusion pump alarms (Brady, 2010). Smart infusion pumps can and should be designed to promote safe administering practices while limiting the ability of users to practice unsafe workarounds.
Monitoring
There is no national standard operating procedure for documenting and/or responding to suspected medication errors, leading to the pervasive inability to identify the root causes of problems. This may be the result of the punitive culture of the healthcare environment itself. As the ASHP pointed out, there is a general “fear of blame and punishment for reporting errors or raising safety concerns” (2008, p. 2373). Unfortunately, this has led to a surprising lack of data that could lead to a better understanding of smart infusion pump usage. One benefit of utilizing smart infusion pumps is the ability to capture and analyze usage data, which would help identify bad practices.
Drug libraries will never be complete, nor should they be. As a result, smart infusion pump systems cannot function without regular library updates. Effective library updates and maintenance can be achieved through a committed interdisciplinary staff, including stakeholders that may not have been traditionally involved, such as information technology (ISMP, 2009). The updates should also be closely monitored through regular oversight.
Clearly, design alone is not to blame for the problems facing smart infusion pumps. New medicine and patient care options will continue to expand and smart infusion pumps need to keep pace with the advances in medicine. The problems that have been outlined, which are certainly not exhaustive, exist throughout the process and the environment of use. The future of safe smart infusion pump use depends wholly on improved practices throughout the process, including standardization of healthcare practices, an improved culture of trust and safety, and a collaborative effort that leads to consensus on safe practices.
*This article was originally featured in Consultant's Corner, a Qmed publication, on March 12, 2012.
American Society of Health-System Pharmacists (ASHP). (2008, July). Proceedings of a summit on preventing patient harm and death from i.v. medication errors. American Journal of Health System Pharmacy, 65, 2367-2379.
Brady, J.L. (2010). First, do no harm: making infusion pumps safer. Biomedical Instrumentation & Technology. 44(5), 372-380.
Koppel, R., Wetterneck, T., Telles, J.L., & Karsh, B. (2008). Workarounds to barcode medication administration systems: Their occurrences, causes and threats to patient safety. Journal of the American Medical Informatics Association, 15(4), 408-423. DOI: 10.1197
Institute for Safe Medication Practices (ISMP). (2009). Proceedings from the ISMP summit on the use of smart infusion pumps: guidelines for safe implementation and use. Retrieved from: http://www.ismp.org/tools/guidelines/smartpumps/comments/printerVersion.pdf
U.S. Food and Drug Administration, Center for Devices and Radiological Health. (2010). White Paper: Infusion Pump Improvement Initiative. Retrieved from U.S. Food and Drug Administration website: http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/GeneralHospitalDevicesandSupplies/InfusionPumps/ucm205424.htm
Yang, Z., Ng, B., Kankanhallia, A., & Yip, J.W. (2011). Workarounds in the use of IS in healthcare: A case study of an electronic medication administration system. International Journal of Human-Computer Studies, 70(2012), 43–65. DOI: 10.1016/j.ijhcs.2011.08.002
The FDA’s position on validation testing requires that testing be conducted in actual or simulated environments. This focus has put the spotlight on the benefits of medical simulation centers. These centers have the ability to recreate sophisticated hospital environments and enable complex medical tasks to be performed in true-to-life scenarios, allowing manufacturers to determine whether a device is safe and effective before exercising it in a real clinical setting such as a clinical trial. Farm uses these facilities when appropriate to conduct summative testing for our clients.
Typically established as training centers for medical professionals, medical simulation centers are resource-rich environments that can be transformed into many types of hospital settings, such as operating rooms with a scrub-in area, labor and delivery suites, or acute care settings. We’ve found simulation centers typically have access to most of the necessary supplies and equipment used in an actual hospital setting, such as surgical gloves and dressings, crash or drug carts, syringes, IV poles, infusion pumps, ultrasound machines, and patient monitors—you name it. Specialized equipment, as well as the device being tested, can also be brought in.
Most medical simulation centers give researchers the ability to mock up the conditions of device use. For example, one can alter the height of the patient bed; enable alarms, overhead pages, and other medical device sounds; and vary lighting to match typical use conditions. The staff members who manage the simulation centers often have medical backgrounds and can provide their expertise and assistance in setting up realistic scenarios.
Simulation centers vary in their level of fidelity (or realism). When planning a usability validation test, researchers should consider the level of fidelity required based on the type of device, the environment in which it is used, and the level of impact environmental conditions could have on user interactions with the device. For example, it may be important to test a device designed for use in an emergency room setting in a high-fidelity simulated environment (such as the Center for Medical Simulation in Cambridge MA) in order to consider how frequent distractions influence the use of the device. According to ANSI/AAMI’s HE75:2009 guidance document, “ER staff, in particular, are regularly interrupted because of the unpredictable nature of their work environment…In one study, ER physicians were three times more likely to be interrupted than their primary-care peers working in medical offices and spent more time managing multiple patients simultaneously than primary-care physicians.”i High-fidelity labs may have more capabilities to recreate ER-like auditory distractions that typically occur, such as alarms, fans, and other sounds from the presence of multiple medical devices, staff and patient conversations, intercom pages, physical commotion, etc. In contrast, low-fidelity labs may be sufficient in cases where the physical and/or clinical environment is much less complex and much more controlled.
Since the primary focus of medical simulation centers is to educate medical professionals, many are equipped with high-fidelity video cameras and audio equipment. Students are recorded performing various procedures and the video is used as a teaching mechanism to improve their skills. Faculty can also use these videos to evaluate the abilities and techniques of students. For usability validation testing purposes, the audio and video capabilities allow stakeholders to observe sessions remotely or from a separate room and provide a record of each test session that may be reviewed later during data analysis.
In some cases staff members or actors (cohorts) play the role of patients, medical professionals, or relatives. These actors can be tasked with introducing realistic interruptions, adding stress or pressure to the medical scenario at hand. For example, they may act highly emotional, ask difficult questions, or interfere with a physician during an important step or procedure.
Mannequins are often used to stand in for patients during a usability test. High-end mannequins such as METI man and Blue Phantom trainers offer a plethora of capabilities, including adjustable internal bleeding levels, tissue that matches the real acoustic characteristics of human tissue and can be used with ultrasound, sensors that can detect the depth of nasal or oral intubation tubes, chest rise and lung sounds that can be synchronized with different breathing patterns, and pulse strength and blood pressure that can vary depending on ECG readings. We’ve seen adult female mannequins that can simulate childbirth and neonates that can produce various types of cries. The high-fidelity anatomy of these simulators, along with the ability to make them “speak” to medical professionals (a function operated by staff members from a control room), translates into an experience that very closely mimics genuine hospital scenarios.
In this ABC News video, students at New York’s Simulation Center receive life-like lessons from high-tech mannequins:
As prescribed in the international standard IEC 62366:2007, “Usability validation may be performed in a laboratory setting, in a simulated use environment, or in the actual use environment.”ii Given all of their capabilities, medical simulation centers are worth considering for medical device validation tests. Be warned, however, that they aren't always the right solution for medical device usability testing; their usefulness depends on the device and its environment of use. Researchers must also consider whether patient behavior would significantly affect the outcome of the test; if so, it may be necessary to evaluate the device in an actual clinical setting with real human patients.
Personally, I’m amazed at what one can accomplish in the simulation centers and eager to see what the next wave of technology innovation may bring!
i Association for Advancement of Medical Instrumentation, ANSI/AAMI HE75:2009
ii IEC/ISO International Standard 62366, Edition 1.0, 2007-10
Group ideation sessions can provide an effective platform for creating novel and innovative ideas. With so much material and so many ideation methods available, however, one of the biggest challenges lies in selecting the most appropriate ideation method.
Two factors are critical when selecting an ideation method: one, correctly identifying the type of problem to be solved, and two, deciding on an appropriate degree of transcendence. See Figure 1 for a visual organization of the selection process.
Identifying the problem: The first step in selecting an ideation method is to understand the type of problem you are solving. For example, if the technology is already developed and your task is to design a more efficient process, you might consider starting with a method that has been proven to be effective for workflow problems. Identifying the right problem can be as challenging as developing a solution, so be sure you have a thorough understanding of what it is you are trying to solve before wasting valuable resources.
Degree of transcendence: Early in the development process, it helps to explore far reaching ideas, but this may not be the case in later phases of development. It’s important to know where you are in the development process, so that you can decide on an appropriate degree of transcendence. Transcendence is defined as the degree by which you deviate from existing ideas or solutions. There are a number of reasons transcendence might be inhibited in group ideation sessions, including cognitive challenges such as social anxiety. Fortunately, some ideation methods are better suited to tradition, while others are more geared towards transcendence. It is important to decide how far you want to push the ideas so that you set appropriate expectations and enable individuals to focus on the right problem.
The two criteria outlined above will not alone ensure successful ideation sessions. In addition, there are key attributes that must be considered before conducting any group ideation session.
Resources available: People are the most valuable resource in an ideation session, so it’s important to ensure you have the right people for the job. Most successful sessions involve an interdisciplinary team of individuals, including people with domain knowledge about the problem.
Degree of structure: Some ideation methods provide more structured guidelines and/or processes than others. Research has shown that individuals new to group ideation perform better using more structured methods. Inspiration card workshops, for example, outline three steps to developing ideas, including a period of divergent thinking and a period of convergent thinking.
Sources of inspiration: There are countless ways to introduce inspiration to an ideation session, many of which are described within the specific ideation methodologies. Sources of inspiration can be physical, literary, metaphorical, technology based, or purely imaginative. Sources of inspiration can greatly influence the direction of the session, so give thoughtful consideration to the inspiration you provide.
Applied Imagination author Alex T. Osborn’s original four rules still apply: (1) go for quantity, (2) encourage unexpected ideas, (3) defer judgment, and (4) combine and improve ideas. The initial goal is divergence—to create a lot of ideas. You should evaluate ideas, using specific criteria, later in the process.
Provide breaks: Research has shown that brief breaks during an ideation session can lead to increased productivity throughout the session. Breaks allow participants to make novel connections or consider new ideas without actively considering the problem.
Create and enforce rules: It almost sounds counterintuitive, but studies have found that providing rules enhances productivity. The rules can be as simple as: (1) stay focused on the problem, (2) do not tell stories, and (3) do not criticize.
Getting stuck: It’s inevitable that at some point in the session the group will run out of ideas and/or the energy to develop them. Consider using quick, informal methods, such as Provocative Operation or Oblique Strategies, to reignite creative thinking.
Positive motivation and incentive: When team members are held accountable for delivering good ideas they make a deliberate effort to better understand the problem and contribute to the overall success of the team.
Organizations can increase the likelihood of conducting successful ideation sessions by sharing experiences in an editable database. By documenting the elements of each session (including the process, people involved, and sources of inspiration used), organizations can develop a company-wide knowledge base highlighting successful ideation experiences. Finally, increased productivity during ideation sessions is not enough to ensure innovation. Ideation sessions must be combined with suitable decision making and down-selection tools to ensure creative ideas are appropriately implemented.
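As a minimal, hypothetical sketch of such a knowledge base, a session record might capture exactly the fields the paragraph above lists (method, people involved, and sources of inspiration) so that sessions can be compared later. All field names and sample data here are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical schema for an ideation-session knowledge base.
@dataclass
class IdeationSession:
    method: str                # e.g., an inspiration card workshop
    problem_type: str          # the type of problem the session targeted
    participants: list         # disciplines involved
    inspiration_sources: list  # inspiration provided to the group
    idea_count: int = 0        # a simple productivity measure
    notes: str = ""

sessions = [
    IdeationSession("inspiration cards", "workflow",
                    ["engineer", "designer", "nurse"],
                    ["competitor products"], idea_count=34),
    IdeationSession("brainwriting", "workflow",
                    ["engineer", "marketer"],
                    ["anthropometric data"], idea_count=21),
]

# Example query: which method produced the most ideas for workflow problems?
best = max((s for s in sessions if s.problem_type == "workflow"),
           key=lambda s: s.idea_count)
```

Even a simple structure like this lets an organization ask which methods, team compositions, and inspiration sources have historically worked best for a given type of problem.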
*This blog post was originally featured on Medical Device Summit's MEDesign blog.
Conducting User Research in the OR
Medical device manufacturers understand the need to conduct in-depth user research in the field (design ethnography) as part of the product requirements process, but little information exists on what to expect when conducting user research in the OR. Although every hospital is different, here is some practical advice from Farm’s experience…
Sustainable Product Design: One Powerful Principle
An individual perspective on a research study titled "Sustainability & Innovation Global Executive Study and Research Project." The bottom line: Companies that embrace sustainability are winning. A focus on one powerful, underlying principle that any company should consider…
Selecting Cities for User Research
How do you select cities for user research? You should select cities based on the type of research—is it generative field research to identify user needs or areas for innovation, or formative testing of prototypes in order to get design feedback? Select cities based on the type of research you’re doing…
System Design for Auditory Perception
Auditory fatigue, or auditory desensitization, occurs in many working environments where auditory perceptual needs go unmet. Poor auditory environments challenge our ability to understand a situation, make appropriate decisions, and respond in a timely manner. Fortunately, these challenges can and should be addressed through appropriate auditory system design…
Ideation Throughout Medical Product Development
A wide range of techniques have been developed to help product development teams produce novel ideas effectively and efficiently. Unfortunately, few design professionals are aware of these methods, and even fewer understand the elements of creativity to help make ideation sessions more productive…