
Three Ways Your Analytics Platform Is Sabotaging Your Analytics (and how to fix it) Part II


This is the second part of the article Three Ways Your Analytics Platform Is Sabotaging Your Analytics (and how to fix it). The first part of this article can be found here.

  3. Don’t Get Used; Use Your Measures

Measures. They’re in your statutory reporting requirements; they’re in your incentive program; they’re in your internal quality-improvement program. But if your analytics platform is locked into only certain standards, you lack the ability to expand, to ask new questions, and to test new ideas. If measures cannot be retrieved or calculated when you need them, then you’re working for the platform, and not the other way around. And if you can’t adjust the granularity or specificity of the measure – allowing your teams to focus on specific sets of facilities, providers, or patients – you just lost a major tool in coordinating your quality-improvement program.

And once they’re there, do you trust your measures? Trust goes beyond simply believing the numbers; rather, are they reliable and traceable? Do you know what standards are the bases for your measures, which data are being used to calculate the measures, and what granularity, filters, and caveats are being applied to the specific results? The proliferation of local measures, in addition to the national standards like HEDIS and MU, means that any one organization may have hundreds or thousands of measures. Is it clear what the measure is doing so you know if it’s still serving its purpose?

To get real power out of the reporting framework in your analytics platform, your measures must be available, useful, and transparent. You need to be able to get at the measures when you need them and in a form that’s useful to you, whether that’s a spreadsheet, a feed to a reporting system, or the tablets in the hands of your care coordinators. Your measures need to be useful to you, not just in satisfying others’ reporting requirements, but for your own needs, from the reports that fuel the morning huddles to the data that drive your quality-improvement program. And don’t forget the transparency. You should be able to find out where the data used for measure calculation came from, and even how the measure functions. It helps to have a readable measure syntax, but clear documentation explaining not just what the measure is but where it came from gives you considerable power over and confidence in your reporting process.

From our perspective as data aggregators and integrators, we see it as our job to ensure success through effective communication and productive tools. However, there are a number of ways for organizations to transform how they use their analytics, although the particulars – and likelihood of success – will greatly depend on the organization.

  • Data integration depends on many factors (ask any organization that has gone through such a transformation), but our experience suggests that the top three promoters of success are: a) Leadership – the presence of a champion or champions in the C-suite who have a clear vision of what integration will bring to their organization and the willingness to promote the organizational changes and investment needed to bring integration to fruition. b) Training – throughout the entire organization, to ensure that users understand the value and impact of integration and also to make sure that workflow reflects the assumptions and definitions of the integration process. c) Quality and Validation – defined both by implicit data quality and by user requirements, such that end users, administrators, and executives will have confidence in outcomes.
  • Connectivity is dependent both on a transport mechanism and the datasets to be connected. The transport mechanism most obviously includes the specifications for the transport protocol, but also relies on the definition of the source and destination dataset schemas. When you look at the definition of your transport protocol, is it capable of properly reflecting the source data in a reliable way? Is there extensive and complete coverage of your use cases? Are the structure rules and constraints on your inbound and outbound feeds sufficiently exact for your analytic needs? This can be an extremely demanding process for any organization, especially those with multiple legacy systems and diverging contracts or incentive programs. Organizations with complex sets of feeds and connections may want to bring in additional assistance in this area; at the least, an outside resource without specific affiliations to particular vendors will help in providing an unbiased perspective on how to improve connectivity.
  • To ensure that their measures are available, useful, and transparent, organizations should focus on meeting at least the initial requirements of the points above – integration and connectivity. Organizations can start this process by identifying crucial weak links in system-system data flow, such as between clinical data sources and appointment-tracking or billing systems. Equally crucial are identifying weak links in human-interface data flow, such as how often data must be ported out of one system, manually manipulated, and transported by one or more individuals, then manually loaded or presented to another system for further analysis or distribution. (A good metric is how many Excel spreadsheets your organization generates, modifies, and consumes on a monthly basis.) By identifying these links and aligning them with the most actionable and impactful measures – for example, those that are most frequently and closely missed – an organization can begin its data transformation with an eye toward immediate improvement in measure performance and use, along with a plan for the long-term improvements in measure and reporting productivity discussed in the article.

Finally, these are some questions I would like to raise, and I look forward to your contributions.

  • How many crucial reports are transferred between systems, departments, and entities using spreadsheets? How many different people are responsible for manually gathering data for, assembling, and then distributing these reports? How much information is gathered from or captured into systems that require person-by-person access, such as an EHR? And how closely is the quality of this process monitored and understood?
  • When was the last time you said, “If only we knew…?” about some facet of your business? What was it you wanted to know? What could you have done if only you had been able to answer that question?
  • How much of your budget do you allocate to improving and integrating your data assets? How much of your revenue stream, profits, and general business workflow depend on these assets? How far apart are these numbers?
  • Are the IT tools used throughout your organization able to talk to one another? If so, when was the last time you checked to make sure that they are all receiving all the information they need to come to correct conclusions? If not, are there cases where some systems might become much more valuable if they could “know” what the other systems “know”?
  • How often do you find that you are “waiting” for results? For example, how long do you have to wait to find out how a provider group performed on some requirement on a given day? Can you know by the next morning, or does it take several days, weeks, or months? What difference would it make to your productivity and business efficacy if you could know the next day?

 

Michael Simon, Ph.D., is a principal data scientist at Arcadia Healthcare Solutions, with experience in data analytics, process efficiency, and experimental design. Since joining Arcadia in 2011, Michael has focused on demonstrating and enhancing the value of client data through exploratory data analysis and advanced statistical methodologies, and on the analysis of healthcare data quality. Michael is a former National Science Foundation AAAS policy fellow and earned his doctorate in biology from Tufts University in Medford, Mass. He also holds a Bachelor of Arts degree in economics and a Bachelor of Science in electrical engineering from Rice University in Houston, Texas.

Three Ways Your Analytics Platform Is Sabotaging Your Analytics (and how to fix it) Part I


Michael A. Simon, a principal data scientist at Arcadia Healthcare Solutions, argues that everyone needs an analytics solution for different reasons. The issue is that your analytics solution may be holding you back – such that it will leave you without some of the most important features you’re looking for. In this post, Michael describes the three crucial properties to embrace if you are thinking about introducing a new analytics platform into your health IT environment, or you’re wondering whether your current system is really bringing you all the power you thought it would.

Everyone needs an analytics solution today!

Your reasons may vary. It may be the federal mandates, or the proliferation of incentives (and penalties) dependent on a growing number of distinct measure sets, or the increasing complexity of managing an aging population with chronic diseases. Or maybe you just don’t want to be left out of the Big Data Boom.

No matter the specific reason, it is prudent to seek out a powerful and effective analytics solution. The market agrees, as thousands of companies pour into what will be a $20 billion industry by 2020 (IQ4I Research & Consultancy).

The catch is that your analytics solution may be holding you back. Despite all the potential and big promises, many analytics systems will leave you without some of the most important features you’re looking for – the ability to understand your population’s health state, to identify points of risk and mounting utilization, or even the ability to simply have confidence in the results.

That’s why, if you’re thinking about introducing a new analytics platform into your health IT environment, or you’re wondering whether your current system is really bringing you all the power you thought it would, you should consider whether it embraces these three crucial properties:

  1. Don’t Separate; Integrate Your Data

If your system only offers you access to claims data, you get a view of the world as it was a month or two ago. Or three. Or more. Claims data are easy – structured, clean, and orderly – but claims data also lack many crucial elements like patient vitals, important details from encounter notes, and activities that don’t typically get a notation in a claim. Wondering whether the patient smokes, and whether he or she is being counseled to stop? Claims won’t tell you.

On the other hand, an EHR will tell you those things, and much more. It is a real-time, clinically rich, and hugely powerful dataset. Unfortunately, if you are only using EHR data, you are forced to operate solely within the boundaries of that EHR. Absent is the vast domain of events that occurred outside that realm, like urgent care or ED visits. How many of our patients actually went to a specialist for that eye exam? And how many are currently taking anti-psychotics but haven’t been seen in months? The EHR doesn’t know.

But a well-integrated system does. An integrated analytics platform brings together both clinical and claims data to create a more up-to-date view of the world, both within the ambulatory center and out into a world of disconnected inpatient and specialist facilities. An integrated system offers greater completeness. It combines knowledge of an unexpected urgent care visit with the patient’s vitals for the months and years preceding it and offers considerable intelligence at every level of the healthcare world, whether it’s a care team discussing an overall population health strategy or a physician preparing to see a longtime patient after an unexpected event. And the person is at the center: Any integrated analytics platform needs a powerful Master Person Index, so there’s never any doubt of whom you’re dealing with.
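To make the Master Person Index point concrete, here is a deliberately naive deterministic matching sketch (the record fields and sample names are hypothetical, and real MPIs use probabilistic linkage across many more attributes, such as address, sex, and identifiers):

```python
from dataclasses import dataclass

# Naive illustration of deterministic patient matching across data sources.
# A production Master Person Index would use probabilistic record linkage.
@dataclass(frozen=True)
class PatientRecord:
    first_name: str
    last_name: str
    birth_date: str  # ISO 8601, e.g. "1956-03-14"
    source: str      # e.g. "ehr" or "claims"

def match_key(rec: PatientRecord) -> tuple:
    """Build a crude deterministic key from normalized name plus birth date."""
    return (rec.first_name.strip().lower(),
            rec.last_name.strip().lower(),
            rec.birth_date)

ehr_rec = PatientRecord("Ana", "Diaz", "1956-03-14", "ehr")
claim_rec = PatientRecord("ana", "DIAZ ", "1956-03-14", "claims")
print(match_key(ehr_rec) == match_key(claim_rec))  # True: same person, two sources
```

Even this toy key shows why normalization matters – and its obvious failure modes (name changes, twins, typos in birth dates) show why an analytics platform needs a far more powerful index underneath.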

  2. Connect Completely and Confidently

If data are the fuel that fires up your analytic platform, what’s your pipeline? All of the fancy analytics, performance measurement and predictive scores go to waste if they’re not fed a high-quality, clinically rich dataset. Does your platform use a condensed extract like the Continuity of Care Document (CCD)? Specifications like CCD offer a compact and convenient way to exchange information between defined roles for explicit purposes, but they also impose severe limitations. Depending on vendor or configuration, the CCD could exclude many of the data elements your analytics system needs in order to answer your questions, resulting in your taking a severe hit in the data-quality department.

Speaking of data quality, does your analytics platform ensure the accuracy of the data it takes in? Of course, it may never be able to tell you that the blood pressure entered should have been 130/70 and not 130/60. But what if it’s 40/70, or 12.6/70, or Yes/70? And what about making sure structured information is actually in a useful structure? If there are 71 different definitions of “smoker” in the system, analysis gets more than a little hampered.
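As a minimal sketch of the kind of sanity check a connector might run on incoming vitals (the function name and the plausible-range thresholds are illustrative assumptions, not clinical rules):

```python
import re

# Structural check for raw blood-pressure strings like "130/70".
BP_PATTERN = re.compile(r"^(\d{2,3})/(\d{2,3})$")

def validate_bp(raw: str) -> bool:
    """Return True if a raw blood-pressure string looks structurally plausible.

    Illustrative adult ranges only; real validation rules would be clinically
    reviewed and configurable.
    """
    match = BP_PATTERN.match(raw.strip())
    if not match:
        return False  # rejects non-numeric entries like "12.6/70" or "Yes/70"
    systolic, diastolic = int(match.group(1)), int(match.group(2))
    return 60 <= systolic <= 250 and 30 <= diastolic <= 150 and systolic > diastolic

print(validate_bp("130/70"))  # plausible reading
print(validate_bp("40/70"))   # systolic below diastolic: flagged
print(validate_bp("Yes/70"))  # not numeric: flagged
```

A check like this can’t catch 130/70 entered as 130/60, but it can quarantine the structurally impossible values before they poison downstream measures.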

What you need is a data pipeline, or connector, that is both complete and confidence-inspiring. A connector should have access to at least a Minimally Viable Dataset that supports basic reporting functions, population-level health analysis, and patient-level care management. Your connector should be transparent in how it handles the data-scrubbing process, and, like any good relationship, should be able to provide feedback on what your data look like and when there are deviations from your typical, “normal” mode of operations. And don’t forget the mappings: Your data should be mapped to appropriate standards (LOINC, CPT, NDC/RxNorm) once they pass through the connector, but there should also be knowledge about local customs and workflows and the ability to map those to understood clinical concepts so that any user can sit down at your analytic platform and spend time learning about people and health and not about how specific items are coded.
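A hedged sketch of the local-to-standard mapping idea described above – every local string and target concept here is made up for illustration; a real connector would map to actual terminology codes (SNOMED CT, LOINC, RxNorm) maintained with clinical review:

```python
# Hypothetical mapping of site-specific "smoker" entries to analytic concepts.
LOCAL_SMOKING_MAP = {
    "smoker": "current-smoker",
    "current every day smoker": "current-smoker",
    "cigarettes, 1 ppd": "current-smoker",
    "quit 2010": "former-smoker",
    "ex-smoker": "former-smoker",
    "never": "never-smoker",
}

def normalize_smoking(local_value: str) -> str:
    """Map a local smoking entry to one shared concept, flagging unknowns."""
    return LOCAL_SMOKING_MAP.get(local_value.strip().lower(), "unmapped")

print(normalize_smoking("  Smoker "))       # current-smoker
print(normalize_smoking("quit 2010"))       # former-smoker
print(normalize_smoking("pipe enthusiast")) # unmapped
```

With 71 local variants in play, the size of the “unmapped” bucket becomes a useful data-quality feedback metric in its own right.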


Part II of this article can be found here.

The Differences between User Stories and Software Requirements Specification (SRS) – Interview with Per Lundholm

This is the fourth post in the series in which I have interviewed several Agile experts to reveal the differences between user stories and software requirements and their application to regulated systems (i.e., health IT systems). You can find the previous post in this series here.

Today’s interviewee is Per Lundholm, a professional developer since 1980. He has been a member of Crisp since 2007. Per helps clients become agile, both as a coder and as a coach. He also enjoys learning new things, and he blogs in both text and video.

Do you think that “user story” is just a fancy name for SRS?

No. It is a completely different thing.

How do you compare a user story with SRS?

A story is a hook for discussion and planning, while a specification leaves no room for discussion.

Do you think that user stories replace SRS?

In a sense they do, since the requirements specification as an idea is dead. Regulations still exist, of course, but that is a different thing.

Which of the two do you prefer working with?

I wouldn’t call them methods, but I see what you mean. User stories are by far the most productive way.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical device software)?

User stories, speaking from experience. Regulations are a fact of life like many other things. You just need to adhere to them, or your stories will not be accepted as done. The definition of “done” becomes more complicated, and you may need to have more steps in the life cycle of a user story. User stories put the focus on what brings value to the user and on things that we can affect, putting things that we can’t affect in the background.

Do you agree with Per? Comments and discussion are welcome.

The Differences between User Stories and Software Requirements Specification (SRS) – Interview with Ron Jeffries

As mentioned in our post The Difference between User Stories and Software Requirements Specification (SRS), we decided to interview some Agile experts on this topic in order to reveal some confusion that may be occurring in the industry. This will be a series of interviews, with one interview presented in each blog post.

The interviewee that we present now is Ron Jeffries, one of the 17 authors and signatories of the Agile Manifesto. He was the onsite XP coach for the original Extreme Programming project in 1996. He is the proprietor of http://ronjeffries.com and senior author of Extreme Programming Installed – the second XP book after Beck’s white book – and the author of Extreme Programming Adventures in C#.

Ron is a frequent speaker and trainer at Agile software-development conferences, having attended and spoken at conferences across the United States and in Europe. He is a well-known independent consultant in XP and Agile methods.

Thus, I saw that Ron would be a great fit for the topic we are investigating. Without further ado, let us go ahead and see what Ron Jeffries had to tell us:

Do you think that “user story” is just a fancy name for SRS?

No, they are quite different. User stories are small descriptions of single features, usually the smaller the better. SRS has no standard definition that I’m aware of, but generally refers to the set of all requirements, not to a single one. Even a single requirement spec is usually much much larger than a story.

How do you compare a user story with SRS?

I wouldn’t. I don’t see the need to do so. You may see a need, of course.

Do you think that user stories replace SRS?

Certainly, products large and small can be and have been built without doing anything I’d call SRS. So the answer is clearly yes – user stories can replace SRS.

Which of the two do you prefer working with?

Stories. They are small, simple, and easy to use. Because they are focused more on conversation than on writing, they are far more collaborative. As I explain stories, they explicitly include acceptance criteria and are concrete enough for any purpose. Search for “Card, Conversation, Confirmation” for more on that thought.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical device software)?

I would use stories for everything at the team level. There might be some higher-level SRS that is parsed out into stories. There is some good work being done with regulated systems and every reason to believe that a lot of the mechanism that big organizations have built up could be trimmed down a lot. This might be seen as more of a job for Lean than for Agile.

Do you agree with Ron Jeffries? Comments and discussion are welcome.

The next post in this series where I interviewed Johanna Rothman can be found here.

Electronic Health Records Can Kill!


Technology is playing a vital role in our lives. It is reaching into every field – even the critical ones, the ones on which our lives depend. Take healthcare for instance. By June 2014, funding for digital healthcare technology companies reached $2.3 billion, exceeding the entire total of 2013.

At the top of the list of these technologies are the EHRs (electronic health records), in which patient information (medical history, medications) is recorded.

Since the Obama administration began encouraging providers to adopt EHRs, usage of such systems has increased dramatically. At the end of 2013, 50 percent of doctors’ offices and 80 percent of eligible hospitals had EHRs.

Fatal mistakes

There are some EHR issues that can hurt, or even kill. Jenny’s death was attributed to bad UX (user experience) design – in this case, I believe, in an EHR system in which three experienced nurses (each with more than 10 years of experience) missed a very critical piece of information, causing Jenny to go without hydration for two shifts. This happened because the nurses were busy trying to figure out the software; distracted, they overlooked that critical piece of information and the care it called for.

The hard truth is that UX issues in EHRs seem to be underevaluated. Ask yourself why you still see old systems in hospitals. Is it a lack of new technology? The issue is that when healthcare organizations acquired EHR systems – priced in the millions – that turned out to have poor usability, those organizations were stuck. In other words, healthcare organizations will be burdened with poor usability for the life of those systems. And poor usability can mean compromised patient safety.

Need more proof? Take a look at the incident involving Scot Silverstein’s 84-year-old mother, attributed to an EHR error.

EHRs are prone to error

EHRs are being increasingly adopted. Doctors are now able to access all of a patient’s medical information online. This implies that the quality of healthcare should improve, right? I’m afraid that recent evidence shows that EHRs may raise safety concerns, for example when a network outage occurs. The fact that EHR problems are often complex and not easy to prevent led the Pennsylvania Patient Safety Authority to call attention to how EHRs can impact safety.

This study, The Role of the Electronic Health Record in Patient Safety Events, was performed to inform the field about the various types of EHR-related errors and, at the same time, to serve as a basis for further study. It explored 3,099 EHR technology-related reports. The majority involved errors in human data entry, such as entry of wrong data or failure to enter data, and a few reports indicated technical failures on the part of the system itself.

Toward safer EHRs

Many issues that may arise from EHRs and affect patient safety are mentioned in this report. If we focus on data-entry issues, since those represent the majority of EHR-related errors, the following actions can be taken in order to avoid such errors and increase patient safety:

  • For EHR certification, usability concerns have to be taken into account – how the system can be used without confusion.
  • It should be stated clearly what the vendor’s EHR system is – and is not – able to do.
  • The healthcare organization should implement policies and strategies for appropriate EHR use.
  • The healthcare organization should ensure that all users of the EHR system receive thorough training.
  • An internal report system that identifies issues related to EHR usage must be implemented.
  • Critical areas in the system that lead to risk factors must be well-identified, and access to such areas should be restricted.

An organization using an EHR system has to raise awareness of how critical such systems can be to the patient’s life, and have a team dedicated to training the staff on addressing any issues that may occur.

In conclusion

If used properly, EHRs can enhance patient welfare, since the patient’s information is easily accessible. However, if EHR safety concerns are not taken seriously, catastrophes can occur.




The Differences between User Stories and Software Requirements Specification (SRS) – Interview with Christiaan Verwijs

This is the third post in the series in which I have interviewed several Agile experts to reveal the differences between user stories and software requirements and their application to regulated systems (i.e., health IT systems). You can find the previous post of this series here.

Today’s interviewee is Christiaan Verwijs. Christiaan is the founder of Agilistic, a company that specializes in helping small businesses and startups on their journey toward Agile software development. He has been working with Scrum for more than three years, for a variety of companies and with a diverse group of teams. He enjoys writing about his Agile adventures in his personal blog: http://www.christiaanverwijs.nl

Do you think that “user story” is just a fancy name for SRS?

The concept of a user story must be understood within the context of Agile software development. For me, a user story is mostly a vehicle to facilitate discussion and collaboration within the team. Some liken user stories to “unrealized product options,” and I like that idea. A user story is the simplest way to describe such options from the user’s point of view – “simplest” in the sense that it provides the team with just enough information to get started, test assumptions, and deliver something of value at the end of the sprint that can be reviewed. In a sense, you could describe a user story as a requirement. But it’s mostly just a “product option” that is either more or less desirable to the product owner.

How do you compare a user story with SRS?

User stories are a form of specification, but a very limited one – by design. Within the context of Agile software development, we accept that projects are subject to frequent change and mistakes (in estimation, in assumptions, etc.). Instead of spending a lot of time on writing thorough specifications, we prefer to work with the least-extensive specifications to get us started. The problem with SRS is that they are mostly a bunch of assumptions about what users want, how functionality should work, and how to technically implement it. Until you’ve built the actual software and shown it to actual users, it’s all just speculation. And if your assumptions turn out to be wrong (as they often are), you have wasted a lot of time on specification. This makes it an expensive failure. Instead, I prefer to work with a method where I can fail early and quickly. Mistakes are less costly this way. And the best way to do this is by frequently delivering and reviewing (real) working software. So instead of focusing on specifications, we focus on building working software.

This does not mean that you can’t extend user stories with more information, like acceptance criteria. For me, this is usually just a short checklist of what a user story must do. Like, “It must work in IE8,” or “The email uses the styling of [Brand X].” They can help to address nonfunctional requirements. Although I’m perfectly happy with backlogs that are not 100 percent made up of user stories. Nowhere in the Scrum Guide does it say that user stories are required or even needed. I prefer to write the acceptance criteria during sprint planning or when preparing the next sprint – just-in-time specification, in a sense. But very minimal, as I want to focus on writing working software instead of documents.

Do you think that user stories replace SRS?

User stories serve a different need. They are not supposed to replace SRS. They are supposed to be part of a different way to build software. Instead of a planned approach, it suits an approach based on collaborative discovery.

Which of the two do you prefer working with?

User stories. But not because of the stories themselves – because of the different process they are a part of. I’m currently working on a large project with a team of six developers. We’ve completed about eight sprints now and have seen a major shift in what we’re building and how we’re building it. When we started the project, we had a lot of ideas and quite a detailed backlog (with user stories, but still). After only a few months, we’ve dropped a lot of supposed core features and replaced them with very different solutions that we discovered as we progressed. I really enjoy this approach because it fits with the discovery process and the unexpected stuff that happens in all projects. If we had had to write software requirements, we would have spent months on that. And we would still have run into the same unexpected and unpredicted issues. It was a lot less painful to drop a few lightly detailed user stories than a bunch of specification documents.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical device software)?

As for regulated systems, you may need more specifications on the software that you’ve written. But I’m not sure if you really need more specifications before you start building software. After all, why would the problems that non-health-IT projects are facing – constant change, different interpretations, difficulty with estimation – not affect these kinds of projects? This software is equally vulnerable to incorrect assumptions. Maybe even more so. What you will need is to very carefully describe how the software works and what happens under what conditions. I think that this kind of specification can be in the form of documentation, but I would actually prefer batteries of exhaustive automated tests – unit, regression, etc. This way, your “documentation” stays up-to-date as tests are adjusted when bugs are fixed, new features are implemented, etc.

Do you agree with Christiaan? Comments and discussion are welcome.

You can find the next post in this series, where I interviewed Per Lundholm, here.

The Differences between User Stories and Software Requirements Specification (SRS) – Interview with Johanna Rothman

This is the second post in the series in which I have interviewed several Agile experts in order to reveal the differences between user stories and software requirements and their application to regulated systems (i.e., health IT systems). You can find the previous post of this series here.

Today’s interviewee is Johanna Rothman, known as the “Pragmatic Manager.” Johanna provides frank advice for your tough problems. She helps leaders see problems, seize opportunities, and remove impediments. Johanna is working on a book about Agile program management. Johanna writes columns for StickyMinds and ProjectManagement.com, writes two blogs on her jrothman.com website, and blogs on createadaptablelife.com.

Do you think that “user story” is just a fancy name for SRS?

No. User stories are often very different from an SRS.

A user story is a single scenario. An SRS is often a collection of requirements. You might have user stories in an SRS, but why would you? You rank each story in a backlog, and as you proceed, you get feedback on the story as you complete it.

In an SRS, you attempt to create all the requirements, often as functional/non-functional requirements.

The problem is that customers want features, stories. They don’t want functional or non-functional requirements. You need to explain these requirements in stories.

How do you compare a user story with SRS?

An SRS is often written in the passive voice (like I just did). It’s often too long. It’s difficult to see the value, or the ranking of the requirements. If people rank the requirements, they often use buckets, as in “must,” “could,” “would,” or something like that. When we write user stories, we talk about a specific user, so we write the story in an active voice. We see the value. We can compare the value of one story against the other.

Do you think that user stories replace SRS?

For me, they do. Why wouldn’t they?

As you complete a story, you get feedback. With that feedback, you can decide which story to do next. You can change.

What I see most often is that product managers/product owners don’t require everything they think they do in an SRS if they use user stories and rank them.

Which of the two do you prefer working with?

I prefer user stories, even if I’m not using Agile. That way, I always see the value and the actor in the requirements. I can easily rank the requirements.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical device software)?

User stories. They explain who the requirement is for, and the story around each requirement, in addition to acceptance criteria.

Do you agree with Johanna? Comments and discussion are welcome.

You can see the next post in this series, where I interviewed Christiaan Verwijs, here.

The Differences between User Stories and Software Requirements Specification (SRS) – Interview with Dr. Alistair Cockburn

This is the fifth post in the series in which I have interviewed several Agile experts to reveal the differences between user stories and software requirements and their application in regulated systems (i.e. health IT systems). You can find the previous post in this series here.

Today’s interview is with Dr. Alistair Cockburn, one of the original creators of the Agile Software Development movement and co-author of “The Agile Manifesto.” He was voted as one of the “All-Time Top 150 i-Technology Heroes” for his pioneering work in the software field. Besides being an internationally renowned IT strategist and author of the Jolt award-winning books “Agile Software Development” and “Writing Effective Use Cases,” he is an expert on agile development, use cases, process design, project management, and object-oriented design.

In 2003 he created the Agile Development Conference; in 2005 he co-founded the Agile Project Leadership Network; and in 2010 he co-founded the International Consortium for Agile. Many of his articles, talks, poems, and blogs are online at http://alistair.cockburn.us.

Do you think that “user story” is just a fancy name for SRS?

No, but close.

How do you compare a user story with SRS?

A story must be end-user visible, something an end user declares is valuable to him or her, and implementable in one sprint. An SRS does not have those characteristics.

Do you think that user stories replace SRS?

Yes.

Which of the two do you prefer working with?

Neither. Use cases + user stories works well.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical device software)?

I do not recommend user stories for regulated systems. I do not recommend SRS ever.

Do you agree with Dr. Cockburn? Comments and discussion are welcome.

You can find the post in which I interviewed Jean Pierre Berchez here.

The differences between User Stories and Software Requirement Specifications (SRS) – Interview with Jean Pierre Berchez

This is the sixth post in the series in which I have asked several agile experts to discuss the differences between user stories and software requirements and their application in regulated systems (i.e., health IT systems). You can find the previous post in this series here.

Today’s interview is with Jean Pierre Berchez. J.P. had his first contact with what we now call Scrum in 1995. He was working at Easel at the same time Jeff Sutherland, Ph.D., invented the forerunner of Scrum. He has more than 20 years’ experience in software development as a project manager, trainer, and coach. He has worked with global companies like Cincom, TogetherSoft, MKS, PTC, and Sun Microsystems, as well as the United States Department of Agriculture.

Besides his work in industry and government, he is a lecturer at the universities of Liechtenstein, Heidenheim, and Stuttgart. J.P. is the organizer of Scrum-Day, which is probably the largest Scrum conference in the German-speaking market, perhaps in all of Europe: http://www.scrum-day.de/.  You can read more about Jean Pierre Berchez at http://www.scrum-events.de/.

Do you think that “user story” is just a fancy name for SRS?

Absolutely not! User stories are an easy-to-understand way to describe functional requirements (but I’m not saying they are easy to write).

How do you compare a user story with SRS?

User stories are more focused on the main problem, which is a functional requirement to be solved. They force you to focus. The user story format is a clear one. Nevertheless, user stories might be enhanced by some additional information. And they definitely need acceptance criteria!
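As an illustration of that last point, here is a hypothetical user story with acceptance criteria in the common Given/When/Then form (the feature and values are invented for the example, not drawn from any real device):

```
Story:
  As a nurse, I want the infusion pump to alert me when a programmed
  dose exceeds the configured safety limit, so that I can intervene
  before the patient is harmed.

Acceptance criteria:
  Given a safety limit of 10 ml/h is configured
  When a dose of 12 ml/h is programmed
  Then the pump blocks the infusion and sounds an alert

  Given a safety limit of 10 ml/h is configured
  When a dose of 8 ml/h is programmed
  Then the infusion starts normally
```

Criteria like these give a team something concrete and testable to trace each story against, which matters in regulated settings.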

Do you think that user stories replace SRS?

Not in every case, but in some for sure.

Which of the two do you prefer working with?

User stories.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical-device software)?

Difficult to answer, but we have customers building their medical devices by writing user stories. I have also seen them used in the automotive sector.

Do you agree with Jean Pierre? Comments and discussion are welcome.

5 Things I Learned from the MidAmerica Healthcare Venture Forum 2015 (for Startups)

After thinking through the presentations at the MidAmerica Healthcare Venture Forum (MHVF) and the discussions I had while there, I wanted to talk about how I could take that information and make it actionable for startup firms and their founders. To do that, I want to dive a little deeper into just a few topics and tie it all back to some great industry feedback that was just provided.

The startup process: How are you planning to launch, and does your plan align with what investors require? You might as well plan ahead, since you’ll be calling them for their money sooner rather than later:

  1. Business plan development. Like I said in a previous post, money is telling you to start with the problem, not the solution. So, in building your business plan, you should be able to point to the problem, the costs associated with it today that you will alleviate, the revenue you will generate for solving the problem, and how your solution is different from others trying to solve the same problem.
  2. Create your team. Make sure you have the personnel that investors want to put time and money into. One investor told me the product is secondary and the team is his major deciding factor. Get experience, build a solid advisory team, and align skill sets to your business – no Uncle Ricos just because they work cheap!
  3. Focus product development on human-centered design. Usability, workflow, and task-oriented solutions are going to differentiate in the future.  Even if you have a service offering, it needs to fit into the workflow of the users of that service. If you ask users to change too much, you will create resistance and pushback that you may not be able to overcome, even with a solid ROI.
  4. Build your MVP. Don’t build your full solution; build just the basic version that you can test and validate in the market. The further you go down the development road, the more money you may have to throw away if the product isn’t well received.
  5. Get clients. Pilot sites, partners, centers of excellence – call them what you will, but get users banging on the product so you can introduce the feedback and experience as proof points to investors. Otherwise, you are calling for investment into an idea; every product is just an idea until you have some users.

Finally, do you have access to independent angel investors? We all do. Go online to www.angel.co and you can find them. DO NOT start calling venture capital firms; they should be the second group you contact, after raising your seed round. Calling them too early wastes your time and theirs, and you will lose credibility with them. Make sure you don’t raise too much, and be sure you don’t raise too little. It’s the Goldilocks problem – it’s got to be just right! This is where good planning and strategy help a ton!

In the next post, I will share the 5 lessons I learned from MHVF (for Fitting into the Marketplace). Stay tuned!