Financial industry regulators warn AI must be monitored
11/15/2017 / By Rita Winters

The speed at which artificial intelligence is developing is harrowing, especially since experts cannot yet say whether AI and machine learning will benefit us in the long run. Machine-operated systems seem like a good fit for companies and banks that rely on statistics and mathematical computation, since humans tend to err in that department, but when it comes to judging human intentions, or understanding humans at all, a robot cannot be relied upon.

The Financial Stability Board (FSB) is an international body that monitors the global financial system and makes recommendations for its improvement based on its observations and analysis. The FSB has stated that rules and regulations for artificial intelligence should be created. There are currently no international laws or regulatory standards on the limits of these technologies, and that gap could be a disaster waiting to happen.

According to the FSB, replacing bank and insurance workers with machines risks creating a dependency on the companies that provide these technologies. This dependency is a risk because no artificial intelligence is perfect so far; flawed systems could, for instance, grant credit increases that borrowers cannot sustain. If an AI provider fails for any reason, it could drag down the banks and other companies that invested in it, and a market failure of that kind would be disastrous for the global economy.

The FSB analyzed many AI applications in the financial industry. While the group saw many benefits, it also noted many risks. As stated on its website, AI provides an efficient way of processing information and can create new forms of interconnectedness between financial markets. The downsides, however, are the consequences of network failures, the lack of personal responsibility and auditability in these systems, and the difficulty of assessing what a machine has learned.

Many insurance companies and banks have already invested in artificial intelligence. Most prefer these systems to human employees because they can be programmed precisely and tend to avoid errors. RegTech firms aim to tackle money laundering and make banks safer with the help of such AI. Nordea, a Nordic bank, plans to shed some 4,000 of its human staff and replace them with automated systems. Some of the more advanced systems are even “taught” to sift through news and research on market trends. International regulators themselves use AI to detect fraudulent accounts and laundered money.

While these robots and machines were created to make the world an easier place to live in, the unforeseen consequences may be quite damaging. Necessity may be the mother of invention, but to what extent? In a corporate setting, an employee who triggers a chain of events that harms the company can be fired immediately. When AI takes over these positions, however, identifying the source of an error becomes difficult; there is no clarity in such scenarios. Who takes responsibility if markets fall? AI and machine learning are developing at a fast pace, and it is difficult to institute rules to ensure that these artificial systems do not harm the economy, or humanity itself.
