Interoperability in action: Healthcare
Let’s use healthcare as an example of how interoperable machine learning
technology can enhance our lives. Consider high-tech medical procedures like CT
scans, which automatically generate large volumes of sensor data for a single
patient, as opposed to health information your doctor manually enters into a
proprietary database during a routine check-up. Without a way to quickly and
automatically integrate these disparate data types for analysis, the potential
for fast diagnosis of critical illnesses is lost. This has created a demand for
optimization across different information models. Current methods and legacy
systems simply weren’t designed with interoperability in mind, but recent
developments in machine learning are opening the door to stronger, faster
translation between information platforms. The result could be vastly improved
medical care and optimized research practices.
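To make the integration problem concrete, here is a minimal sketch in Python of mapping two disparate record formats into one common structure. Every field name and record shape here (hounsfield_values, bp_systolic, and so on) is hypothetical, invented purely for illustration; real systems would map established standards such as DICOM or HL7 FHIR rather than ad hoc dictionaries.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# A common schema both sources are mapped into (hypothetical).
@dataclass
class Observation:
    patient_id: str
    recorded_at: datetime
    kind: str            # e.g. "ct_scan" or "vitals"
    values: List[float]

def from_ct_sensor(raw: dict) -> Observation:
    """Map a (hypothetical) CT scanner export to the common schema."""
    return Observation(
        patient_id=raw["pid"],
        recorded_at=datetime.fromisoformat(raw["acquired"]),
        kind="ct_scan",
        values=raw["hounsfield_values"],   # automatically generated sensor data
    )

def from_checkup_record(raw: dict) -> Observation:
    """Map a (hypothetical) manually entered check-up row to the schema."""
    return Observation(
        patient_id=raw["patient"],
        recorded_at=datetime.fromisoformat(raw["visit_date"]),
        kind="vitals",
        values=[raw["bp_systolic"], raw["bp_diastolic"], raw["heart_rate"]],
    )

# Once both feeds share one schema, analysis code can consume them together.
timeline = sorted(
    [from_ct_sensor({"pid": "p1", "acquired": "2019-03-01T09:30:00",
                     "hounsfield_values": [12.0, -480.0, 35.5]}),
     from_checkup_record({"patient": "p1", "visit_date": "2019-02-15T14:00:00",
                          "bp_systolic": 122, "bp_diastolic": 79,
                          "heart_rate": 68})],
    key=lambda o: o.recorded_at,
)
for obs in timeline:
    print(obs.kind, obs.recorded_at, obs.values)
```

The point of the sketch is the shape of the solution: once every source is translated into one schema, downstream analysis no longer needs to know which proprietary system a record came from.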
The role of neural networks
Modeled after the human brain, neural networks are sets of algorithms designed
to recognize patterns. They interpret sensory data through a kind of machine
perception, labeling or clustering raw input. The patterns they recognize are
numerical, contained in vectors, into which all real-world data, be it images,
sound, text or time series, must be translated. According to a 2017 article in
MIT News, neural networks were first proposed in 1944 by Warren McCullough and
Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as
founding members of what’s sometimes called the first cognitive science
department. Since then, the approach has fallen in and out of favor, but today
it’s making a serious comeback.
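As a concrete illustration of real-world data being translated into a vector, here is a minimal sketch, using only NumPy, of a tiny network that flattens a 2x2 grayscale "image" into an input vector and produces a single label score. The weights are random and untrained, so this shows the data flow only, not a working classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-world input: a tiny 2x2 grayscale image (pixel intensities in 0..1).
image = np.array([[0.1, 0.9],
                  [0.8, 0.2]])

# Translate it into a vector, the numerical form the network operates on.
x = image.flatten()              # shape (4,)

# One dense layer with random (untrained) weights: score = sigmoid(Wx + b).
W = rng.normal(size=(1, 4))      # 4 inputs -> 1 output score
b = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

score = sigmoid(W @ x + b)       # a number in (0, 1)
print("label score:", score[0])  # e.g. interpret > 0.5 as one cluster/label
```

Training would adjust W and b so that the scores line up with known labels; the essential step shown here is that images, sound, text or time series all enter the network only after being encoded as vectors of numbers.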