Thursday, August 22, 2019

Semiconductor industry leads in artificial intelligence adoption: Accenture


The semiconductor industry is the most bullish about adopting artificial intelligence (AI) and the most convinced of the significant impact it will have on its business, according to Accenture Semiconductor Technology Vision 2019, the annual report from Accenture that predicts key technology trends likely to redefine business over the next three years.
More than three-quarters of the semiconductor executives surveyed for the report (77%) said they have adopted AI within their business or are piloting the technology. In addition, nearly two-thirds of semiconductor executives (63%) expect that AI will have the greatest impact on their business over the next three years, compared with just 41% of executives across 20 industries. This ranks AI higher for chipmakers than other disruptive technologies surveyed, including distributed ledgers, extended reality, and quantum computing.
AI, comprising technologies that range from machine learning to natural language processing, enables machines to sense, comprehend, act and learn in order to extend human capabilities. According to the report, AI will have a two-fold impact on chipmakers: opening new market opportunities for them and improving the design and the fabrication process.


“AI will be a major growth driver for the semiconductor industry in light of high manufacturing costs and the growing complexity of chip development,” said Syed Alam, a managing director at Accenture who leads its Semiconductor practice globally. “To capture this opportunity, chipmakers should leverage AI technologies and partnerships to increase efficiency across their operations.”
A 5G Revolution
Nearly nine in 10 semiconductor executives (88%) say that 5G, the next generation of wireless technology, will revolutionize their industry by offering new ways to provide products and services. This revolutionary impact is being driven by the high demand for 5G-enabled smartphones, growth in autonomous vehicle manufacturing, and the rise in government initiatives for building smart cities.
The report also cites challenges that 5G network implementations pose for the semiconductor industry, including the high costs for technology and infrastructure advancements and the concerns around privacy and security.

Workforce reskilling

The report finds that companies must support a new way of working for their employees. More than a third of semiconductor executives (37%) expect to move more than 40% of their workforce into new roles in the next three years, which will require substantial reskilling.
“Technology advancements such as AI, 5G and IoT will force semiconductor companies to fundamentally reimagine the skilling of their workforces,” said Dave Sovie, senior managing director and global High Tech industry lead at Accenture. “To do that, they will need to empower and skill their workforce to conceive, make, distribute and support the next generation of products in the marketplace.”

Wednesday, August 21, 2019

Intel launches first artificial intelligence chip, Springhill


Intel Corp on Tuesday launched its latest processor, its first designed for artificial intelligence (AI) and intended for large computing centers.
The chip, developed at Intel’s development facility in Haifa, Israel, is known as Nervana NNP-I, or Springhill, and is based on a 10-nanometre Ice Lake processor that allows it to cope with heavy workloads using minimal energy, Intel said.


Intel said its first AI product comes after it invested more than $120 million in three AI start-ups in Israel.
“In order to reach a future situation of ‘AI everywhere’, we have to deal with huge amounts of data generated and make sure organizations are equipped with what they need to make effective use of the data and process them where they are collected,” said Naveen Rao, general manager of Intel’s artificial intelligence products group.
“These computers need acceleration for complex AI applications.” Intel said the new hardware chip will help its Xeon processors in large companies as the need for complicated computations in the AI field increases.

Tuesday, August 20, 2019

How Artificial Intelligence and Machine Learning Shape the Customer Journey


Customer experience professionals have been obsessed with mapping customer journeys — optimizing business processes and streamlining the passage of engagement.
Leveraging the power of artificial intelligence (AI) and machine learning to act on real-time insights, and to proactively engage at the right moment through the best channel for prospects, customers and the business, drives outstanding business results.
Carl Jones, Predictive Engagement Lead ANZ at Genesys, said in an online interaction that there are multiple points where AI and machine learning can play a positive role in customer experience.
Jones gave the example of a prospect who searches for “low rate credit card” and lands on a financial services site from a Google ad.
“It’s obviously important that the site personalizes the landing page and content offers to reflect the customer’s intent — there’s not much point in showing insurance offers if they are looking for a credit card.
“But, it’s also really important that the customer is proactively assisted to apply for the right card for them. This proactive approach isn’t simply a matter of popping a chat window after 20 seconds and hoping for the best, but recognizing that the customer has issues or questions or is struggling and interacting with them in the best way possible.”
For example, in the UK, Smyths Toy Superstores reduced its shopping cart abandonment rate by 30 percent and increased high-value sales by three percent by engaging customers at the right time.
Jones said this is where AI and machine learning really assist, as it is simply not possible or cost-effective for humans to watch all the traffic on a website, decide what each prospect is trying to achieve, and interact with the most valuable prospects via the most effective method.
AI can decide how to interact with a customer, for instance via a chatbot or a human agent, and even which human agent would be the most effective for that customer based on the agent’s previous success.
“The overall outcome of this multi-touchpoint AI approach is that more prospects reach the point, sometimes with assistance, of completing the purchase or application process.”
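The routing logic Jones describes can be pictured with a small, purely illustrative sketch. The intents, handler names and success rates below are hypothetical assumptions, not Genesys functionality: the idea is simply to predict a visitor's intent from available signals and hand the interaction to whichever chatbot or agent has the best track record for that intent.

```python
# Illustrative sketch only: predict a visitor's intent, then route to the
# handler (bot or human agent) with the best historical success for it.
from dataclasses import dataclass

@dataclass
class Visitor:
    search_terms: str        # e.g. the ad keywords that brought them to the site
    seconds_on_page: float   # extra behavioural signals a real model would use
    pages_viewed: int

def predict_intent(visitor: Visitor) -> str:
    """Toy intent model; a production system would use a trained classifier."""
    if "credit card" in visitor.search_terms.lower():
        return "credit_card_application"
    return "general_browsing"

# Hypothetical historical success rates per (intent, handler) pair.
SUCCESS_RATES = {
    ("credit_card_application", "chatbot"): 0.18,
    ("credit_card_application", "agent_anna"): 0.31,
    ("credit_card_application", "agent_ben"): 0.24,
    ("general_browsing", "chatbot"): 0.05,
}

def choose_handler(intent: str) -> str:
    """Route to whichever handler has converted best for this intent."""
    candidates = {h: r for (i, h), r in SUCCESS_RATES.items() if i == intent}
    return max(candidates, key=candidates.get) if candidates else "chatbot"

visitor = Visitor("low rate credit card", seconds_on_page=25.0, pages_viewed=3)
intent = predict_intent(visitor)
print(intent, "->", choose_handler(intent))
```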

Monday, August 19, 2019

Pentagon Underinvesting in Artificial Intelligence


In recent years, defense officials have been banging the drum about the importance of adopting artificial intelligence to assist with everything from operating autonomous platforms and intelligence analysis to logistics and back-office functions. But the Pentagon is not pumping enough money into this technology, according to one expert.
“The critical question is whether the United States will be at the forefront of these developments or lag behind, reacting to advances in this space by competitors such as China,” Susanna Blume, director of the defense program at the Center for a New American Security, said in a recent report titled, “Strategy to Ask: Analysis of the 2020 Defense Budget Request.”

The request includes just $927 million for the Pentagon’s AI efforts, about 0.13 percent of the department’s proposed $718 billion topline, she noted.
“Given the enormous implications of artificial intelligence for the future of warfare, it should be a far higher priority for DOD in the technology development space, and certainly a higher priority than the current No. 1 — development of hypersonic weapons,” she said. “While DOD is making progress in AI … it is, quite simply, still not moving fast enough.”
The Pentagon is hoping to leverage advances in the commercial sector, which is investing far greater amounts of money into AI. It has a number of initiatives aimed at building bridges with companies in tech hubs such as Silicon Valley, Boston, and Austin, Texas. However, not everyone in those places is on board with assisting the military, Blume noted.
“While DOD labs and agencies continue to do good and important work in this space, the primary AI innovators are tech companies such as Google,” she said. “Unfortunately, engaging with these companies has sometimes proved challenging for DOD.”
As an example, Blume noted that Google pulled out of Project Maven — which utilizes artificial intelligence to analyze drone footage — after protests from employees who didn’t want their work to be used for warfighting purposes.
On the brighter side, the Pentagon is investing more in unmanned platforms that could use AI, Blume said. The department requested $3.7 billion for autonomous systems in 2020. Plans include acquiring a variety of unmanned aircraft, ships, and undersea vehicles.
“These autonomous systems all have the potential to alleviate many of the services’ readiness and manning woes, while generating additional capacity and capability,” she said.
“They also create opportunities for innovative operational concepts that can help the U.S. military maintain and extend a position of dominance against its most challenging competitors.”

Friday, August 16, 2019

Artificial intelligence can contribute to a safer world


We all see the headlines nearly every day: a drone disrupting the airspace at one of the world’s busiest airports, putting aircraft at risk (and inconveniencing hundreds of thousands of passengers); attacks on critical infrastructure; a shooting in a place of worship, a school, a courthouse. Whether primitive (gunpowder) or cutting-edge (unmanned aerial vehicles), technology in the wrong hands can empower bad actors and put our society at risk, creating a sense of helplessness and frustration.



Current approaches to protecting our public venues are not up to the task and, frankly, appear to meet Einstein’s definition of insanity: “doing the same thing over and over and expecting a different result.” It is time to look past traditional defense technologies and see whether newer approaches can tilt the pendulum back in the defender’s favor. Artificial intelligence (AI) can play a critical role here, helping to identify and classify potential threats, and to trigger counteractions, faster than any security personnel could.
Using technology to prevent violence, specifically by searching for concealed weapons, has a long history. Alexander Graham Bell invented the first metal detector in 1881 in an unsuccessful attempt to locate the fatal slug as President James Garfield lay dying of an assassin’s bullet. The first commercial metal detectors were developed in the 1960s. Most of us are familiar with their use in airports, courthouses and other public venues to screen for guns, knives, and bombs.
Fortunately, new AI technologies are enabling major advances in physical security capabilities. These new systems not only deploy advanced sensors to screen for guns, knives, and bombs; they also get smarter with each screening, building an increasingly large database of known and emerging threats while segmenting off alarms for common, non-threatening objects (keys, change, iPads, etc.).
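The triage step such systems perform can be sketched in a few lines. Everything below is an illustrative assumption, not any vendor's implementation: score each detection against known threat and known-benign signatures, alert only on likely threats, and route unknowns to a human whose decision grows the database.

```python
# Illustrative triage of one detection from a screening sensor's classifier.
KNOWN_THREATS = {"handgun", "knife", "pipe_bomb"}
KNOWN_BENIGN = {"keys", "coins", "tablet", "phone", "belt_buckle"}

def triage(detected_label: str, confidence: float, threshold: float = 0.8) -> str:
    """Return an action for one detected object."""
    if detected_label in KNOWN_BENIGN:
        return "ignore"              # segment off common, non-threatening objects
    if detected_label in KNOWN_THREATS and confidence >= threshold:
        return "alert_security"      # trigger a counteraction immediately
    return "flag_for_review"         # unknown object: a human decides, and the
                                     # outcome is added to the signature database

for label, conf in [("keys", 0.95), ("handgun", 0.91), ("umbrella", 0.70)]:
    print(label, "->", triage(label, conf))
```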

Wednesday, August 14, 2019

Artificial intelligence helps detect atrial fibrillation


Researchers in the US have developed a rapid, artificial intelligence (AI)-based test that can identify patients with an intermittent abnormal heart rhythm, even when the rhythm appears normal at the time of the test. This 10-second test for atrial fibrillation could be a significant improvement over current test procedures, which can take weeks or even years.

Atrial fibrillation is a common cardiac condition that is estimated to affect between three and six million people in the US alone. The condition is associated with an increased risk of stroke, heart failure, and mortality – but it is underdiagnosed. This is because it can be asymptomatic and the patient’s heart can go in and out of the arrhythmia, making diagnosis tricky. It is sometimes caught on an electrocardiograph (ECG), but often detection requires the use of implantable or wearable monitors to capture infrequent atrial fibrillation episodes over time.
“Atrial fibrillation is an arrhythmia where the atrium, or top chamber of the heart, loses its coordinated contractile activity and instead quivers because the electrical impulses are changed in the way they course through the atrium,” explains Peter Noseworthy of the Mayo Clinic. “So, the top chamber beats irregularly and it causes the bottom chamber, the ventricle, usually to beat fast and irregularly, which can be bothersome, but most importantly it predisposes people to risk of stroke.”
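For readers curious how a 10-second recording becomes a prediction, here is a minimal sketch of the general approach, not the Mayo Clinic model: a small 1D convolutional network that takes a 12-lead ECG segment (assumed here to be sampled at 500 Hz, so 5,000 samples per lead) and outputs a probability of atrial fibrillation. Architecture and sizes are illustrative assumptions.

```python
# Illustrative 1D CNN for classifying a 10-second, 12-lead ECG segment.
import torch
import torch.nn as nn

class ECGAFibNet(nn.Module):
    def __init__(self, leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # collapse the time axis
        )
        self.classifier = nn.Linear(64, 1)     # single logit: AF vs. not AF

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg shape: (batch, leads, samples)
        x = self.features(ecg).squeeze(-1)
        return torch.sigmoid(self.classifier(x))

model = ECGAFibNet()
dummy_ecg = torch.randn(1, 12, 5000)           # one synthetic 10-second recording
print(model(dummy_ecg))                        # probability of atrial fibrillation
```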

Tuesday, August 13, 2019

Can artificial intelligence beat a human hacker?

Please type the words you see in the image. At some point, we have all completed a captcha to prove we are human online. So, when a robot successfully completed the test, we were left asking: are our computers secure? Here Jonathan Wilkins, marketing director at obsolete parts supplier EU Automation, explains how machine learning and artificial intelligence (AI) impact cyber-security.
A captcha, or Completely Automated Public Turing test to tell Computers and Humans Apart, is designed based on the Turing test. Alan Turing, a founder of modern computing, proposed a test in which a machine mimics human conversation in written exchanges so convincingly that an outside judge cannot distinguish it from a person. This idea inspired the field of artificial intelligence, and with it security tests to distinguish between humans and machines. Technology is advancing rapidly, meaning that computers can now solve problems that traditionally could only be solved with human intuition. But what does a robot beating a captcha have to do with cyber-security in manufacturing facilities?

Digitalization

As manufacturing becomes more digitalized, connected machines collect real-time data that is vital to keeping facilities running at optimum capacity. As more machines become connected thanks to the Internet of Things (IoT), they also become more vulnerable to viruses that can be introduced to the system.
Hacking
The growing use of AI in the industry means that manufacturers must do more to secure their information. However, manufacturers can look to similar AI technology for help: if it can hack a system by pretending to be human, could it successfully block a similar threat from a human hacker?
Industrial viruses are traditionally introduced from an external source, such as a USB drive or an incoming data file. Both machines and humans find it difficult to predict how such a threat will impact IT and manufacturing systems. However, humans have the upper hand over computers, as they can use past experience and knowledge to deal with any system abnormalities.
Robots do not have the same intuition, but advancements in machine learning allow computers to make decisions based on collected data. Each time the machine experiences something new, its capabilities increase.

Security

Some professionals argue that traditional security protocols are reactive and only deal with attacks once they occur. In the past, human hackers have easily broken through barriers such as passwords and firewalls. Now, cyber-security companies are offering solutions that use AI and machine-learning technology to provide more preventative security for manufacturers.
Security company Darktrace, for example, uses machine learning to build a unique pattern of normal behavior for each machine and to detect any abnormalities. The software can then spot emerging threats that might otherwise go unnoticed and stop them before damage occurs.
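As a rough, hedged illustration of what learning a per-machine pattern of normal activity can look like, here is a minimal sketch using a generic isolation-forest anomaly detector. The telemetry features, values and thresholds are assumptions for illustration, not Darktrace's actual method.

```python
# Illustrative per-machine anomaly detection: learn "normal" telemetry for one
# machine, then flag readings that deviate from that learned pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical telemetry for one machine: [network packets/s, CPU load %].
normal_traffic = rng.normal(loc=[200.0, 35.0], scale=[20.0, 5.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)                   # learn the machine's normal pattern

new_readings = np.array([
    [205.0, 33.0],    # looks like business as usual
    [950.0, 92.0],    # sudden spike: possible intrusion or infection
])
print(detector.predict(new_readings))          # 1 = normal, -1 = abnormal
```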
Artificial intelligence is developing rapidly and changing cyber-security considerations in manufacturing. It is unclear how much AI will be capable of in the future, but we need to rethink how we distinguish between humans and robots online.