Friday, August 30, 2019

Artificial Intelligence makes life better


Grandpa Jiang, a 73-year-old man suffering from lumbar diseases, found “a best friend and assistant” in a new senior care center in the Pudong New Area.
It’s a robot called UFU that at first glance looks like a normal wheelchair. But it can be adjusted to a standing position, is steered with a control stick, and is specifically designed for elderly people and those recovering from surgery.
“With my best friend UFU, I can walk and move like everyone else, so my daughter doesn’t need to come look after me every day,” said Jiang, who hasn't been able to stand or move by himself for more than a decade.

Thursday, August 29, 2019

Artificial Intelligence Revolution: This is Why Universities are Re-examining Curriculums


Artificial Intelligence is steadily taking hold, given its ability to enhance efficiency across operations. Increased automation, thanks to machine learning technology, is gradually becoming a reality. Amidst the transformation, there have been concerns that the technology could render many people jobless. However, that appears not to be the case.
The gains on offer from Artificial Intelligence are one reason institutions of higher learning are increasingly taking note of the technology. Programs designed to equip students with relevant machine learning skills are cropping up, now that it has become clear what kind of impact the technology is poised to have on the labor market.

Even with higher levels of automation, an estimated 40-50% of current jobs should remain viable over the next 15 years. The technology is poised to have a significant impact on the way people, as well as machines, carry out work in the labor market, in part because increases in computing power are making it possible for AI-powered machines to carry out functions normally performed by humans. While AI will displace old jobs, it will also give rise to new ones.
The availability of vast troves of data has made it possible to train machines, using artificial intelligence, to achieve specific goals and tasks. Advances in AI technology mean that AI-powered machines could one day exceed human intelligence, and there is even the possibility of a new form of AI emerging that is well beyond human intelligence.

Wednesday, August 28, 2019

Shaping the Future Of Technology Governance: Artificial Intelligence and Machine Learning


Artificial Intelligence (AI) is the software engine that drives the Fourth Industrial Revolution. Its effect can be seen in homes, businesses and political processes. In its embodied form as robots, it will soon be driving cars, stocking warehouses and caring for the young and elderly. AI holds the promise of solving some of society’s most pressing issues but also presents challenges such as inscrutable “black box” algorithms, unethical use of data and potential job displacement.


As rapid advances in machine learning (ML) increase the scope and scale of AI’s deployment across all aspects of daily life, and as the technology can learn and change on its own, multistakeholder collaboration is required to optimize accountability, transparency, privacy and impartiality to create trust.
Our Platform brings together key stakeholders from the public and private sectors to co-design and test policy frameworks that accelerate the benefits and mitigate the risks of AI and ML.

Tuesday, August 27, 2019

Microsoft announces skilling initiative in artificial intelligence for government officials in charge of IT


In line with the government’s Digital India vision, Microsoft India launched the Digital Governance Tech Tour, a national program to deliver critical AI and intelligent cloud computing skills to government officials in charge of IT across the country. The initiative comprises a series of physical and virtual workshops and aims to train 5,000 personnel over a period of 12 months. The announcement reaffirms Microsoft’s commitment to empowering government organizations to leverage AI and secure cloud technology for efficient, transparent and productive governance.

AI and intelligent technologies are becoming all-pervasive, driving change across businesses, communities and governments. As India advances towards its vision of becoming a $5 trillion economy, applying AI and data analytics through secure and compliant cloud-based tools can deliver actionable, predictive and effective citizen-focused services while enabling more secure inter-departmental and cross-agency collaboration. Through this program, Microsoft will help upskill government officials, equipping them with the digital skills and experience needed now and in the future to successfully deploy cloud-based solutions.

Monday, August 26, 2019

How China Is Revolutionising Education Using Artificial Intelligence


In China, huge strides are already being made when it comes to integrating artificial intelligence into education.
In July 2017, China’s highest governmental body, the State Council, introduced the Next Generation Artificial Intelligence Development Plan (NGAIDP), aimed at connecting AI with most parts of life in China, including healthcare, transportation, government, and education. With a plan to become a world leader in AI by 2030, the NGAIDP roadmap also emphasized increased education in AI at primary and middle schools.


On education specifically, the Chinese have been working to create intelligent education. The government’s ambitious plan will require huge amounts of AI research, supported by professionals trained in the technology, and it has set 2030 as the deadline to integrate AI into Chinese infrastructure. In this regard, huge strides are already being made in educating the populace using AI. In this way, China is not only familiarizing its young population with the technology but also revolutionizing how education is imparted. According to one estimate, China led the way in the more than $1 billion invested globally in AI education last year.

Start-ups Are Spearheading AI in the Chinese Education Industry

The integration of AI into the education sector in China is also growing fast, particularly after the government incentivized the use of AI through tax breaks. Chinese tech startups have ramped up AI projects with the government’s support and secured funding from investors. Many of those projects have been launched in the country’s schools to create intelligent education systems.

Friday, August 23, 2019

Artificial Intelligence to Drive Future Weapons Development


Artificial Intelligence (AI) is rapidly permeating the defense industry to aid and improve human decision-making. Over the past few years, several new products and technologies have come into play, indicating that the technology is on an upward trajectory.
AI has a clear edge in areas such as the super-fast decision-making required in repetitive tasks, and in combining data from multiple sensors to present options for decision-making — even making some of those decisions by itself.


In March 2019, the UN held a meeting to discuss a ban on autonomous weapons called for by 25 nations; the ban was opposed by the US, South Korea, Russia, Israel, China, and Australia, countries that have made substantial investments in unmanned, autonomous systems.

Thursday, August 22, 2019

Semiconductor industry leads in artificial intelligence adoption: Accenture


The semiconductor industry is the most bullish about adopting artificial intelligence (AI) and understanding the significant impact it will have on their industry, according to Accenture Semiconductor Technology Vision 2019, the annual report from Accenture that predicts key technology trends likely to redefine business over the next three years.
Three-quarters of semiconductor executives surveyed for the report (77%) said they have adopted AI within their business or are piloting the technology. In addition, nearly two-thirds of semiconductor executives (63%) expect that AI will have the greatest impact on their business over the next three years, compared with just 41% of executives across 20 industries. This ranks AI higher for chipmakers than the other disruptive technologies surveyed, including distributed ledgers, extended reality, and quantum computing.
AI, comprising technologies that range from machine learning to natural language processing, enables machines to sense, comprehend, act and learn in order to extend human capabilities. According to the report, AI will have a two-fold impact on chipmakers: opening new market opportunities for them and improving the design and the fabrication process.


“AI will be a major growth driver for the semiconductor industry in light of high manufacturing costs and the growing complexity of chip development,” said Syed Alam, a managing director at Accenture who leads its Semiconductor practice globally. “To capture this opportunity, chipmakers should leverage AI technologies and partnerships to increase efficiency across their operations.”
A 5G Revolution
Nearly nine in 10 semiconductor executives (88%) say that 5G, the next generation of wireless technology, will revolutionize their industry by offering new ways to provide products and services. This revolutionary impact is being driven by the high demand for 5G-enabled smartphones, growth in autonomous vehicle manufacturing, and the rise in government initiatives for building smart cities.
The report also cites challenges that 5G network implementations pose for the semiconductor industry, including the high costs for technology and infrastructure advancements and the concerns around privacy and security.

Workforce reskilling

The report finds that companies must support a new way of working for their employees. More than a third of semiconductor executives (37%) expect to move over 40% of their workforce into new roles in the next three years, which will require substantial reskilling.
“Technology advancements such as AI, 5G and IoT will force semiconductor companies to fundamentally reimagine the skilling of their workforces,” said Dave Sovie, senior managing director and global High Tech industry lead. “To do that, they will need to empower and skill their workforce to conceive, make, distribute and support the next generation of products in the marketplace.”

Wednesday, August 21, 2019

Intel launches first artificial intelligence chip Springhill


Intel Corp on Tuesday launched its latest processor, its first using artificial intelligence (AI), designed for large computing centers.
The chip, developed at its development facility in Haifa, Israel, is known as Nervana NNP-I, or Springhill, and is based on a 10-nanometre Ice Lake processor that will allow it to cope with high workloads using minimal amounts of energy, Intel said.


Intel said its first AI product comes after it invested more than $120 million in three AI startups in Israel.
“In order to reach a future of ‘AI everywhere’, we have to deal with huge amounts of data generated and make sure organizations are equipped with what they need to make effective use of the data and process it where it is collected,” said Naveen Rao, general manager of Intel’s artificial intelligence products group.
“These computers need acceleration for complex AI applications.” Intel said the new hardware chip will help its Xeon processors in large companies as the need for complicated computations in the AI field increases.

Tuesday, August 20, 2019

How Artificial Intelligence and Machine Learning Shape Customer Journey


Customer experience professionals have been obsessed with mapping customer journeys — optimizing business processes and streamlining each stage of engagement.
Leveraging the power of artificial intelligence (AI) and machine learning — using real-time insights and proactively engaging at the right moment through the best channel for prospects, customers and the business — drives outstanding business results.
Carl Jones, Predictive Engagement Lead ANZ at Genesys, said in an online interaction that there are multiple points where AI and machine learning can play a positive role in customer experience.
Jones gave the example of a customer searching for “low rate credit card” and landing on a financial services site from a Google ad.
“It’s obviously important that the site personalizes the landing page and content offers to reflect the customer’s intent — there’s not much point in showing insurance offers if they are looking for a credit card.
“But, it’s also really important that the customer is proactively assisted to apply for the right card for them. This proactive approach isn’t simply a matter of popping a chat window after 20 seconds and hoping for the best, but recognizing that the customer has issues or questions or is struggling and interacting with them in the best way possible.”
For example, in the UK, Smyths Toy Superstores reduced its shopping cart abandonment rate by 30 percent and increased high-value sales by three percent by engaging customers at the right time.
Jones said this is where AI and machine learning really assist, as it’s simply not possible or cost-effective for humans to watch all the traffic on a website, decide what each prospect is trying to achieve, and interact with the most valuable prospects via the most effective method.
AI can decide how to interact with a customer — for instance, via a chatbot or a human — and even which human agent would be the most effective for that customer, based on the agent’s previous success.
“The overall outcome of this multi-touchpoint AI approach is that more prospects reach the point, sometimes with assistance, of completing the purchase or application process.”
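To make the mechanics concrete, here is a minimal sketch of that kind of routing logic, assuming a hypothetical store of per-agent, per-intent success rates; the data and function names are invented for illustration and are not Genesys’s implementation.

```python
# Hypothetical sketch: route a web visitor to the agent (or bot) with the
# best historical success rate for their inferred intent.
from collections import defaultdict

# (handler_id, intent) -> [successes, attempts], e.g. from past engagements
history = defaultdict(lambda: [0, 0])
history[("bot", "credit_card")] = [40, 100]       # 40% success
history[("agent_42", "credit_card")] = [75, 100]  # 75% success
history[("agent_7", "insurance")] = [60, 100]

def best_handler(intent, candidates):
    """Pick the candidate with the highest observed success rate."""
    def rate(c):
        successes, attempts = history[(c, intent)]
        return successes / attempts if attempts else 0.0  # unseen pairs score 0
    return max(candidates, key=rate)

print(best_handler("credit_card", ["bot", "agent_42", "agent_7"]))
# -> 'agent_42'
```

A production system would of course learn these rates continuously and balance exploration against exploitation, but the core decision is this kind of lookup.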

Monday, August 19, 2019

Pentagon Underinvesting in Artificial Intelligence


In recent years, defense officials have been banging the drum about the importance of adopting artificial intelligence to assist with everything from operating autonomous platforms to intelligence analysis, logistics and back-office functions. But the Pentagon is not pumping enough money into this technology, according to one expert.
“The critical question is whether the United States will be at the forefront of these developments or lag behind, reacting to advances in this space by competitors such as China,” Susanna Blume, director of the defense program at the Center for a New American Security, said in a recent report titled, “Strategy to Ask: Analysis of the 2020 Defense Budget Request.”

The request includes just $927 million for the Pentagon’s AI efforts, about 0.13 percent of the department’s proposed $718 billion topline, she noted.
“Given the enormous implications of artificial intelligence for the future of warfare, it should be a far higher priority for DOD in the technology development space, and certainly a higher priority than the current No. 1 — development of hypersonic weapons,” she said. “While DOD is making progress in AI … it is, quite simply, still not moving fast enough.”
The Pentagon is hoping to leverage advances in the commercial sector, which is investing far greater amounts of money into AI. It has a number of initiatives aimed at building bridges with companies in tech hubs such as Silicon Valley, Boston, and Austin, Texas. However, not everyone in those places is on board with assisting the military, Blume noted.
“While DOD labs and agencies continue to do good and important work in this space, the primary AI innovators are tech companies such as Google,” she said. “Unfortunately, engaging with these companies has sometimes proved challenging for DOD.”
As an example, Blume noted that Google pulled out of Project Maven — which utilizes artificial intelligence to analyze drone footage — after protests from employees who didn’t want their work to be used for warfighting purposes.
On the brighter side, the Pentagon is investing more in unmanned platforms that could use AI, Blume said. The department requested $3.7 billion for autonomous systems in 2020. Plans include acquiring a variety of unmanned aircraft, ships, and undersea vehicles.
“These autonomous systems all have the potential to alleviate many of the services’ readiness and manning woes, while generating additional capacity and capability,” she said.
“They also create opportunities for innovative operational concepts that can help the U.S. military maintain and extend a position of dominance against its most challenging competitors.”

Friday, August 16, 2019

Artificial intelligence can contribute to a safer world


We all see the headlines nearly every day: a drone disrupting the airspace of one of the world’s busiest airports, putting aircraft at risk (and inconveniencing hundreds of thousands of passengers); attacks on critical infrastructure; a shooting in a place of worship, a school, a courthouse. Whether primitive (gunpowder) or cutting-edge (unmanned aerial vehicles), technology in the wrong hands can empower bad actors and put our society at risk, creating a sense of helplessness and frustration.



Current approaches to protecting our public venues are not up to the task and, frankly, appear to meet Einstein’s definition of insanity: “doing the same thing over and over and expecting a different result.” It is time to look past traditional defense technologies and see if newer approaches can tilt the pendulum back in the defender’s favor. Artificial Intelligence (AI) can play a critical role here, helping to identify, classify and initiate countermeasures against potential threats faster than any security personnel.
Using technology to prevent violence, specifically by searching for concealed weapons, has a long history. Alexander Graham Bell invented the first metal detector in 1881 in an unsuccessful attempt to locate the fatal slug as President James Garfield lay dying of an assassin’s bullet. The first commercial metal detectors were developed in the 1960s. Most of us are familiar with their use in airports, courthouses and other public venues to screen for guns, knives, and bombs.
Fortunately, new AI technologies are enabling major advances in physical security capabilities. These new systems not only deploy advanced sensors to screen for guns, knives, and bombs; they also get smarter with each screen, building an increasingly large database of known and emerging threats while filtering out alarms for common, non-threatening objects (keys, change, iPads, etc.).
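As a rough illustration of a system that “gets smarter with each screen,” the sketch below incrementally updates a classifier from operator feedback; the sensor features and labels are invented, and this is not any vendor’s actual product.

```python
# Illustrative sketch: a screening model updated after every scan, so common
# benign objects (keys, coins) stop triggering alarms while confirmed threats
# sharpen the model. Features and labels are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = benign, 1 = threat

def screen_and_learn(sensor_features, true_label=None):
    """Classify one scanned item; optionally learn from operator feedback."""
    x = np.asarray(sensor_features).reshape(1, -1)
    try:
        alarm = bool(model.predict(x)[0])
    except Exception:        # model not yet fitted on the first items
        alarm = True         # fail safe: alarm until trained
    if true_label is not None:            # operator confirms benign/threat
        model.partial_fit(x, [true_label], classes=classes)
    return alarm

# Simulated scans: [metal_mass, density, shape_elongation]
screen_and_learn([0.1, 2.0, 0.3], true_label=0)  # keys -> benign
screen_and_learn([0.9, 7.8, 0.9], true_label=1)  # blade -> threat
```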

Wednesday, August 14, 2019

Artificial intelligence helps detect atrial fibrillation


Researchers in the US have developed a rapid, artificial intelligence (AI)-based test that can identify patients with an abnormal heart rhythm even while the rhythm appears normal. This 10-second test for atrial fibrillation could be a significant improvement over current test procedures, which can take weeks or even years.

Atrial fibrillation is a common cardiac condition that is estimated to affect between three and six million people in the US alone. The condition is associated with an increased risk of stroke, heart failure, and mortality – but it is underdiagnosed. This is because it can be asymptomatic and the patient’s heart can go in and out of the arrhythmia, making diagnosis tricky. It is sometimes caught on an electrocardiograph (ECG), but often detection requires the use of implantable or wearable monitors to capture infrequent atrial fibrillation episodes over time.
“Atrial fibrillation is an arrhythmia where the atrium, or top chamber of the heart, loses its coordinated contractile activity and instead quivers because of changes in the way the electrical impulses course through the atrium,” explains Peter Noseworthy of the Mayo Clinic. “So, the top chamber beats irregularly and it causes the bottom chamber, the ventricle, usually to beat fast and irregularly, which can be bothersome, but most importantly it predisposes people to risk of stroke.”
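The Mayo Clinic model itself is not described here, but as a hedged sketch of the general approach, a small 1-D convolutional network can map a fixed-length ECG trace to an atrial fibrillation probability; the sampling rate, layer sizes and labels below are assumptions for illustration only.

```python
# Minimal sketch (not the Mayo Clinic model): a 1-D convolutional network
# that maps a 10-second single-lead ECG trace to an AF probability.
# Assumes 500 Hz sampling, so one input is 5000 samples x 1 channel.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5000, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(atrial fibrillation)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(ecg_segments, labels, ...) would require a labeled ECG dataset.
```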

Tuesday, August 13, 2019

Can artificial intelligence beat a human hacker?

Please type the words you see in the image. At some point, we have all completed a captcha to prove we are human online. So when a robot successfully completed the test, we were left asking: are our computers secure? Here Jonathan Wilkins, marketing director at obsolete parts supplier EU Automation, explains how machine learning and artificial intelligence (AI) impact cyber-security.

A captcha, or Completely Automated Public Turing test to tell Computers and Humans Apart, is designed on the model of the Turing test. Alan Turing, a founder of modern computing, proposed a test in which a machine mimics human conversation in written messages closely enough that outsiders cannot distinguish between human and machine. This idea helped inspire the field of artificial intelligence, bringing with it security tests to distinguish between humans and machines. Technology is advancing rapidly, and computers can now solve problems that traditionally could only be solved with human intuition. But what does a robot beating a captcha have to do with cyber-security in manufacturing facilities?

Digitalization

As manufacturing becomes more digitalized, connected machines collect real-time data that is vital to keeping facilities running at optimum capacity. As more machines become connected thanks to the Internet of Things (IoT), they also become more vulnerable to viruses that can be introduced to the system.
Hacking
The growing use of AI in industry means that manufacturers must do more to secure information. However, manufacturers can look to similar AI technology for help. If AI can hack a system by pretending to be human, could it also successfully block a similar threat from a human hacker?
Industrial viruses are traditionally introduced from an external source, such as a USB drive or an incoming data file. Both machines and humans will find it difficult to predict how such a threat will impact IT and manufacturing systems. However, humans have the upper hand over computers, as they can use past experience and knowledge to deal with system abnormalities.
Robots do not have the same intuition, but advancements in machine learning allow computers to make decisions based on collected data. Each time the machine experiences something new, its capabilities increase.

Security

Some professionals argue that traditional security protocols are reactive and only deal with attacks once they occur. In the past, human hackers have easily broken through barriers such as passwords and firewalls. Now, cyber-security companies are offering solutions that use AI and machine-learning technology to introduce more preventative security for manufacturers.
Security company Darktrace uses machine learning to build a unique pattern of normal activity for each machine and detect any abnormalities. The software can then spot emerging threats that might otherwise go unnoticed and stop them before damage occurs.
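Darktrace’s method is proprietary, but the underlying idea — learn each machine’s normal behavior, then flag deviations — can be sketched with a stock anomaly detector; the traffic features and numbers below are invented for illustration.

```python
# Sketch of the general idea (not Darktrace's proprietary method): learn a
# baseline of each machine's normal traffic, then flag deviations from it.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [packets/min, mean packet size (bytes), distinct destination IPs]
baseline = np.array([[120, 540, 3], [115, 530, 2], [130, 560, 3],
                     [118, 545, 4], [125, 550, 3]])
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_samples = np.array([[122, 548, 3],     # looks like normal traffic
                        [900, 64, 210]])   # burst to many hosts: suspicious
print(detector.predict(new_samples))       # 1 = normal, -1 = anomaly
```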
Artificial intelligence is developing rapidly and changing cyber-security considerations in manufacturing. It is unclear how much AI will be capable of in the future, but we need to rethink how we distinguish between humans and robots online.

Monday, August 12, 2019

Artificial intelligence, quantum computing and the laws of encryption


The last decade has seen several science and technology breakthroughs. From self-driving cars to 3D printing, clean energy technologies to artificial intelligence assistants, progress has been swift. While some technologies take decades to become useful, others disrupt quickly. In 2019, two major technologies have been making headlines but aren’t being taken very seriously: artificial intelligence (AI) and quantum computing (QC). These technologies would change the nature of cyber-attacks. Artificial intelligence can be used not only to probe but also to specifically tailor attacks against organizations and other targets. We’ve already seen instances of AI used to copy the voice and mannerisms of a person to create something that looks and sounds as though the real person said it, known as “deepfakes”.
Quantum computing took off in the early 90s and is now emerging as the next generation of computing. Operations that take hours or days will happen in seconds with quantum power. With that technology, the scaling of computation goes up dramatically, to the point where the time needed to break traditional encryption would shrink to weeks, or maybe even minutes. This means breaking some of the foundational encryption we see in use today. Estimates for when QC will really take off range anywhere from 5 to 20 years. One thing we do know, however, is that QC has the potential to completely transform the cyber threat landscape.



That said, quantum computing poses risks to some cryptography algorithms. For instance, public-key cryptographic algorithms based on the discrete logarithm problem, the elliptic curve discrete logarithm problem, and the integer factorization problem (RSA encryption) are susceptible to efficient attacks using Shor’s algorithm. Whoever develops quantum computers first would be able to break legacy encryption protecting historical information. Parallel to the development of quantum computing has been that of “post-quantum” or “quantum-resistant” cryptography, which aims to create encryption mechanisms that resist quantum decryption capabilities. It remains to be seen whether these will achieve widespread adoption before quantum computers can trivialize existing encryption schemes.
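For readers who want the standard reasoning behind that claim, the textbook reduction from factoring to period finding — the core of Shor’s algorithm — goes as follows:

```latex
% Factoring N = pq reduces to finding the period r of a^x mod N.
% Pick a random a with
\gcd(a, N) = 1, \qquad r := \min\{\, k > 0 : a^k \equiv 1 \pmod{N} \,\}.
% If r is even and a^{r/2} \not\equiv -1 \pmod{N}, then
(a^{r/2} - 1)(a^{r/2} + 1) \equiv 0 \pmod{N}
\;\Longrightarrow\; \gcd\!\left(a^{r/2} \pm 1,\, N\right)
\text{ is a nontrivial factor of } N.
% A quantum computer finds the period r in polynomial time, whereas the
% best known classical factoring algorithms run in super-polynomial time.
```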
While threat actors may use quantum computing to defeat some encryption algorithms, we expect that the adoption of quantum key distribution (QKD) will increase the secrecy of communication networks. The nature of quantum key generation and distribution guarantees communication systems’ security because the observation of a quantum-generated key will necessarily degrade or otherwise alter the key in a detectable fashion. As a result, we predict this will severely inhibit traffic interception schemes, as recipients would be able to identify messages that have been viewed prior to their receipt.
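A toy simulation of the idea — the detection property of the BB84 protocol, not any production QKD system — shows why: an eavesdropper who measures in a random basis corrupts roughly a quarter of the shared key bits, which the legitimate parties can spot by comparing a sample.

```python
# Toy BB84 simulation (illustrative only): an eavesdropper measuring in a
# random basis introduces ~25% errors in the sifted key, so interception
# is detectable by comparing a sample of key bits.
import random

def sifted_key_error_rate(n=100_000, eavesdrop=True):
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        basis_a = random.randint(0, 1)              # Alice's encoding basis
        state_basis, state_bit = basis_a, bit
        if eavesdrop:
            basis_e = random.randint(0, 1)          # Eve measures and re-sends
            if basis_e != state_basis:
                state_bit = random.randint(0, 1)    # wrong basis: random result
            state_basis = basis_e
        basis_b = random.randint(0, 1)              # Bob's measurement basis
        measured = state_bit if basis_b == state_basis else random.randint(0, 1)
        if basis_b == basis_a:                      # sifted key: bases match
            kept += 1
            errors += (measured != bit)
    return errors / kept

print(sifted_key_error_rate(eavesdrop=False))  # ~0.0
print(sifted_key_error_rate(eavesdrop=True))   # ~0.25: Eve is detectable
```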
As of today, quantum computers exist, and developers can access them through the cloud. However, current quantum computers have some limitations, including the instability of quantum computing environments, which makes their practical use more difficult; researchers are working to mitigate these limitations. It should be noted that quantum computing is still primarily in the research and development phase; large-scale production and rollout have not occurred yet. Companies and countries are spending millions of dollars to win the race to get there first. U.S. quantum computing development has achieved good performance in terms of the raw number of qubits (a 72-qubit processor); however, China currently holds the record for experimentally demonstrating the 18-qubit entanglement that is the basis of quantum computation and quantum communication. China may be behind in raw quantum computing hardware, but it is making good headway on finding applications for quantum computing once it becomes a reality. While quantum computing is still years away from becoming a conventional technology, it is a tight arms race.

Wednesday, August 7, 2019

Twilio: Harnessing The Power of AI (Artificial Intelligence)


On the earnings call, CEO Jeff Lawson noted: “We have the opportunity to change communications and customer engagement for decades to come.”
And yes, as should be no surprise, one of the drivers will be AI (Artificial Intelligence). Just look at the company’s Autopilot offering (at today’s Signal conference, Twilio announced that the product is generally available to customers). This is a system that allows for the development, training, and deployment of intelligent bots, IVRs and Alexa apps.
Now, it’s true that there is plenty of hype around AI. Let’s face it: many companies are just using it as marketing spiel to gin up interest and excitement.
Yet Autopilot is the real deal. “The advantage that’s unique to Twilio’s API platform model is that we build these tools in response to seeing hot spots of demand and real need from our customers,” said Nico Acosta, Director of Product & Engineering for Twilio’s Autopilot & Machine Learning Platform. “We have over 160,000 customers of every size across a huge breadth of industries and we talk to them about the solutions they need to improve communication with their customers. What do they keep building over and over? What do they actively not want to build because it’s too heavy a lift? Those conversations inform the products we create that ultimately help them differentiate themselves through better customer experience.”



AI Innovation
Consider that Autopilot breaks the conventional wisdom that there is an inherent trade-off between operational efficiency and customer experience. To do this, Twilio has been focusing on pushing innovation with AI, notably in the two areas below (sketched in code after the list):
·         Classification: This involves grouping utterances and mapping them to the correct task. With AI, the system gets smarter and smarter.
·         Entity Extraction: This uses NLP (Natural Language Processing) to locate details like times, places, cities, and phone numbers. This makes it easier to automate repetitive tasks like setting up appointments (if the customer says “7 at night,” the NLP will understand this).
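A generic, hedged sketch of both techniques — not Twilio Autopilot’s internals; the training phrases, labels and regex are invented — might look like this:

```python
# Generic sketch of the two techniques (not Twilio Autopilot internals):
# map utterances to tasks, and pull structured entities out of the text.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. Classification: utterance -> task
train = [("book an appointment for tomorrow", "schedule"),
         ("I want to schedule a visit", "schedule"),
         ("cancel my reservation", "cancel"),
         ("please cancel the booking", "cancel")]
texts, labels = zip(*train)
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(clf.predict(["can you cancel my appointment?"]))   # -> ['cancel']

# 2. Entity extraction: find a time expression like "7 at night"
m = re.search(r"\b(\d{1,2})\s*(?:at night|pm|in the evening)",
              "set it up for 7 at night")
if m:
    print(f"extracted time: {int(m.group(1)) + 12}:00")  # -> 19:00
```

Real systems replace the regex with trained entity recognizers, but the split between intent classification and entity extraction is the same.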

Tuesday, August 6, 2019

Building trust in Machine Learning and AI in digital lending


From fraud prevention to investment predictions and marketing, Machine Learning (ML) and Artificial Intelligence (AI) are recent cutting-edge developments in the finance industry. Particularly in the digital lending space, the next step to truly integrate these technologies is to build consumer trust in them.
The trust barrier facing machine intelligence 
In a recent global study by Pegasystems, only 35 percent of survey respondents said they felt comfortable with AI. More specific to the finance industry is HSBC’s Trust in Technology study, which found that only 7 percent of respondents would trust an AI to open a bank account, and 11 percent would trust an AI to dispense mortgage advice. Notably, the dominant concern was that AI cannot understand our needs as well as another human being. This trust barrier is the challenge facing banks, traditional financial institutions and fintechs such as digital lenders.

Overcoming the lack of empathy in AI 
Be it opening a bank account or having loan applications screened by AI, consumers are uncomfortable with having a machine “in charge”, despite the fact that an AI could reduce human bias and personal preference in granting loans and approving deals. Digital lenders can overcome the lack of empathy in AI by educating their consumers about how their algorithm works and what the requirements are — for example, by exploring ways to increase algorithmic accountability, including the possibility of having algorithms reviewed by a regulatory board.
Overcommunicating how AI is deployed to screen applicants is crucial.
For example, by taking personal bias out of the equation, there are fewer chances for people to take advantage of personal connections to get a loan approved. With AI, applicants would be assessed based on their qualifications alone. It is also important to be transparent and stringent in how funds are handled. For instance, P2P lending platforms like Validus do not keep investor funds in their own accounts. Funds are held in escrow until they are disbursed to borrowers. Digital lenders should emphasize the due diligence required for handling monies, especially given that they have less face-to-face interaction with customers.
Getting past the interpretability barrier in ML
The interpretability barrier is a long-standing issue in AI and ML. It refers to how machine thinking can yield accurate results but lack the ability to explain them. This is a constant source of frustration for consumers, who are disinclined to trust what they cannot understand. To get past the interpretability barrier, it is important for digital lenders to explain what their AI cannot, especially when the AI processes many factors and data points. For example, to make the complexity digestible, the Credit Bureau of Singapore (CBS) can use its credit scoring system to explain which factors contribute to bad credit, even if exact numbers on how much impact different factors carry cannot be disclosed.
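As one hedged illustration of explaining which factors contribute to bad credit without disclosing exact model outputs, a transparent scoring model can report the direction and weight of each factor; the features, data and weights below are invented and are not CBS’s.

```python
# Illustrative only: a transparent credit-scoring model whose per-factor
# contributions can be explained to an applicant. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["late_payments_12m", "utilization_pct", "recent_enquiries"]
X = np.array([[0, 20, 1], [5, 95, 6], [1, 40, 2], [7, 99, 8],
              [0, 10, 0], [4, 80, 5]])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = defaulted

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>20}: {'raises' if coef > 0 else 'lowers'} risk "
          f"(weight {coef:+.2f})")
```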

Prove and acknowledge the criticality

Criticality refers to the degree of risk posed to the consumer should the AI make a mistake. The higher the degree of criticality, the more important it is to prove the AI’s accuracy. Digital lenders must acknowledge that their AI has a higher degree of criticality than most consumer services, such as AIs tasked with recommending the next Netflix movie. If a loan of US$300,000 is disbursed to a company that cannot repay it, the result is significant financial damage to the digital lender and its investors. To prove and acknowledge criticality, digital lenders should constantly communicate the provability of their AI through benchmarking, repeated simulations and backtesting. For example, the digital lender’s default rate compared with the top three banks should be included in regular reporting to investors.
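A minimal sketch of the kind of backtest such reporting could rest on — the probabilities, outcomes and cutoff are invented — compares the model’s approval decisions with realized outcomes on historical loans:

```python
# Illustrative backtest: check the realized default rate among loans the
# model would have approved, using held-out historical outcomes.
import numpy as np

predicted_default_prob = np.array([0.02, 0.10, 0.55, 0.07, 0.80, 0.04])
actually_defaulted     = np.array([0,    0,    1,    0,    1,    0])

approved = predicted_default_prob < 0.25          # hypothetical cutoff
realized_rate = actually_defaulted[approved].mean()
print(f"approved {approved.sum()} of {len(approved)} loans, "
      f"realized default rate {realized_rate:.1%}")
```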

Build confidence in ML and AI

Before ML and AI, financial institutions had to overcome significant distrust when algorithmic trading was first used by banks in the ’80s and ’90s. By constantly communicating proof of accuracy, explaining the concepts behind an algorithm’s decision-making, and exercising corporate responsibility, financial institutions have successfully normalized funds that are run purely by algorithms.