The Core Informatics user group meeting, Launch2016, is just two short months away. The Launch2016 website already has an agenda packed with speakers. To better address the needs and expectations of our attendees, we have been taking a closer look at the issues facing our industry.
Our new Director of European Sales, John Egerton, was more than willing to share his observations and predictions on the state of laboratory informatics. Over the course of his career, John has held roles across IT and sales. He believes that every day there’s more to learn, and that’s what keeps this job and this industry so interesting.
Core blog: What are the biggest issues facing the industry?
JE: I think there are two main issues. One is scalability. Definitely scalability.
Nearly every business that reaches a certain size requires IT solutions that scale with it, which is why many organizations are choosing to offload their internal informatics solutions in favor of cloud and SaaS-based offerings. Think Office 365, Salesforce.com, Skype, and other solutions that are served externally to the business but still very much used on a daily basis.
Lab informatics is no different. If you’re an organization with labs in various locations around the globe, do you try to shoehorn your lab informatics platform into the same infrastructure that used to host your email (before you outsourced it), then worry about whether your lab in Hong Kong can access the data as readily as the lab in Philadelphia? Or do you simply host it securely in Amazon’s cloud, like many of the other technologies you’re no doubt already using?
Even if you’re not a globally dispersed business, the risk/reward calculation, as well as the economics, of hosting your own informatics platform still lean heavily toward the cloud.
I think the second biggest issue facing the industry is centered on the representation of data and information: more accurately, how we do this now that the game has changed.
Representing Data and Information
Every day we amass a sometimes inconceivable amount of data. We then use our informatics solutions to transform this data into information by providing context. Back in the day, this information was all we needed to draw insight and make decisions. After all, the whole point of information is to provide knowledge that a human being can use to make effective decisions at a given point in time.
But what happens when the amount of information available to us increases exponentially? How do we understand which information is important and which is not? In this day and age, it would be impossible for a human being to even read through all of the information we now generate daily within our organizations. Enter the modern buzzword: Big Data.
Traditional reporting solutions have only ever provided a snapshot of information at a certain point in time. They only tell you ‘what’ is happening, not ‘why’ things are happening. Add this to the information overload I’ve already mentioned, and it becomes easy to understand why many organizations are struggling to make ‘smart’ decisions.
Hard-hitting headlines, such as those from the recent banking crisis, have woken people up to the reality that simply capturing masses of information is pretty much worthless unless you can also gain better insight into why things are happening and what, as a result, may happen next.