FountainBlue's November 13 VIP roundtable was on the topic of 'Data is the New Black'. Please join me in thanking our participating executives and our hosts at Micron for framing the discussion.
It's a given that each executive from each industry is impacted by data, so all were challenged to provide a provocative perspective on how data is and will be impacting our businesses, our industries, our daily lives. Below is a summary of my discussion.
The executives agreed that data volume, velocity, and variety are constantly growing. The challenge and the opportunity is to collaboratively deliver VALUE while focusing on the needs of our customers.
Data is plentiful and growing, but finding the relevant and true data quickly will make the difference. Indeed, solutions that reach valid conclusions from limited data will become increasingly important. This is, in fact, how the human brain functions… The trick is finding the most relevant kernels of true data.
One idea was to generate 'fast data' using AI and ML algorithms to make quick decisions based on specific criteria - like whether someone with little credit history is at high risk for defaulting on a loan.
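The credit-history example above can be sketched in code. This is a minimal, hypothetical rule-based scorer, not anyone's actual underwriting model; the field names, weights, and threshold are all illustrative, and a real 'fast data' system would train an ML model on historical outcomes.

```python
# Hypothetical sketch of a fast loan decision on specific criteria.
# Fields, weights, and threshold are illustrative assumptions only.

def default_risk_score(applicant: dict) -> float:
    """Combine a few illustrative criteria into a 0..1 risk score."""
    score = 0.0
    if applicant.get("credit_history_months", 0) < 12:
        score += 0.4   # thin credit file: little history to judge by
    if applicant.get("debt_to_income", 0.0) > 0.45:
        score += 0.35  # high debt load relative to income
    if applicant.get("recent_delinquencies", 0) > 0:
        score += 0.25  # recent missed payments
    return min(score, 1.0)

def decide(applicant: dict, threshold: float = 0.5) -> str:
    """Instant decision: refer high-risk applicants for human review."""
    return "refer" if default_risk_score(applicant) >= threshold else "approve"

thin_file = {"credit_history_months": 6, "debt_to_income": 0.5}
print(decide(thin_file))  # scores 0.75 -> "refer"
```

The point of the sketch is the shape of the decision, not the weights: a handful of validated criteria can turn plentiful raw data into a millisecond yes/no/refer answer.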
Others brought up the importance of examining edge cases - double-clicking not on what's regular and normal, but finding patterns and learnings from the exceptional and different scenarios. From these anomalies, you may at times make a broader statement or conclusion - or just identify the anomaly as a one-off, a fluke.
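One simple way to surface those exceptional scenarios is to flag values far from the norm. The sketch below uses a basic z-score rule on a made-up list of readings; real anomaly detection would likely use richer methods (isolation forests, domain-specific checks), so treat this purely as an illustration of separating the regular from the exceptional.

```python
# Illustrative anomaly detection: flag values far from the mean.
# The readings and the threshold are made-up examples.
from statistics import mean, stdev

def find_anomalies(values, z_threshold=3.0):
    """Return values more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [v for v in values if abs(v - mu) / sigma > z_threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 95]  # 95 is the exceptional case
print(find_anomalies(readings, z_threshold=2.0))  # [95]
```

Whether 95 is a broader pattern or a one-off fluke is exactly the judgment call the executives described; the code only finds the candidate, humans decide what it means.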
What's important is to focus on the validity, the relevance, and the immediacy of the data, for if you're not working with this type of data, you'll have Garbage In, which leads to Garbage Out (commonly known as GIGO). The executives were in violent agreement that collaborating with others to validate data sources would lead to better decisions, better applications, better solutions.
Several executives mentioned the need to focus on HOW data can solve many problems, and the importance of leveraging data to solve the most relevant problems, and the need to customize solutions, for there is no one-size-fits-all scenario.
Additional best practices include aggregating data from across departments, organizations, and product lines, which brings larger, broader perspectives relevant to all involved, and collaborating on an ongoing basis to ensure elegant, continuous data integrity, especially when milliseconds make a huge difference. An example is validating the signatures between a nurse and a patient so that prescriptions are distributed more quickly when timing is critical.
Everyone agreed that the data generated and gathered can help organizations run more efficiently, can help customers better understand what's working, can help predict and capitalize on new strategies, and can help deliver richer, more relevant customer experiences.
With the rising dependence on the generation, storage and management of data, and especially now as the data is integrated into everyone's day-to-day lives, we're finding huge volumes of data generated on the edge. Responding quickly to the data on the edge will be a challenge, and filtering out the most relevant data will make the solution more effective and more efficient.
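Filtering at the edge can be as simple as forwarding only readings that change meaningfully, so upstream systems see less but more relevant data. The deadband filter below is a minimal sketch under that assumption; the sensor values and the delta threshold are illustrative.

```python
# Sketch of edge filtering: forward a reading only when it differs
# from the last forwarded value by at least min_delta (a deadband).
# Values and threshold are illustrative assumptions.

def filter_on_edge(readings, min_delta=0.5):
    """Drop readings that barely differ from the last forwarded one."""
    forwarded, last = [], None
    for r in readings:
        if last is None or abs(r - last) >= min_delta:
            forwarded.append(r)
            last = r
    return forwarded

sensor = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0, 25.2]
print(filter_on_edge(sensor))  # [20.0, 21.0, 25.0]
```

Seven readings shrink to three without losing the shape of the signal, which is the efficiency gain the executives pointed to: respond quickly at the edge, send upstream only what matters.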
We ended with a call for collaboration, for standardization, for policy upgrades around data generation, privacy, and security. The semiconductor industry that gave Silicon Valley its name continues to develop the chips that power servers, machines, tools, and devices. Innovations in the semiconductor industry can in turn support the data collection, storage, and distribution tasks normally attributed to software solutions.
Our executives concluded that Data can't be the new oil, for oil is organic and more predictable. Data is defined, created, standardized, used by humans, and humans have decided that data will be everywhere. So humans must decide how to use it well, to solve specific problems. The need is urgent, the need is immediate, the opportunities are vast.