Bots behaving badly – and what to do about them

If the complexities of personal identification in a digital age are not head-spinning enough, what about the identification of bots? As we increasingly interact with computer programmes and as computer programmes increasingly interact with each other, so ensuring they can be trusted becomes essential.

Bots behaving badly

Bots can behave badly or stupidly, intentionally or otherwise. The Microsoft bot, Tay, is famous, but not in a good way. It was meant to learn from Twitter and create compelling tweets, but had to be pulled within a day because it became misogynistic, racist and obscene.

Similarly, there was the boat-racing computer game, CoastRunners, in which an AI-directed boat was meant to collect rewards by completing a course. Instead, it worked out that by going up a dead-end lagoon and smashing back and forth into the sides, it could earn more rewards than by finishing the course as intended.

OpenAI, which developed the Reinforcement Learning (RL) algorithm that powered the boat, reported: “Despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track, our agent manages to achieve a higher score using this strategy than is possible by completing the course in the normal way”.

Which is funny unless the malfunctioning RL bot is, for instance, part of an air traffic control system, a traffic management system or other real-world application.
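To see why a reward signal can go wrong like this, here is a deliberately simplified sketch. It is not OpenAI's environment or code; the reward values, hit rates and strategy names are invented for illustration. The point is only that an agent optimising a misspecified score can rationally prefer looping in a lagoon over finishing the race.

```python
# Toy illustration of reward misspecification in reinforcement learning.
# All numbers and names below are assumptions, not taken from CoastRunners.

FINISH_BONUS = 100          # one-off reward for completing the course
TARGET_REWARD = 10          # reward for hitting a respawning target

def episode_return(strategy: str, steps: int = 1000) -> float:
    """Total reward accumulated over one episode under a given strategy."""
    if strategy == "race":
        # Finishing the course takes the whole episode and pays once.
        return float(FINISH_BONUS)
    if strategy == "loop_in_lagoon":
        # Circling a cluster of respawning targets pays a small reward
        # over and over, which adds up to far more than finishing.
        hits_per_step = 0.05                     # assumed hit rate
        return steps * hits_per_step * TARGET_REWARD
    raise ValueError(f"unknown strategy: {strategy}")

if __name__ == "__main__":
    for s in ("race", "loop_in_lagoon"):
        print(f"{s:15s} -> total reward {episode_return(s):.0f}")
    # The agent only sees the reward signal, so it prefers the looping
    # strategy: 500 vs 100 in this toy setup.
```

The bot is not "misbehaving" by its own lights; it is doing exactly what the reward tells it to do, which is precisely the problem.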

Bots might unintentionally malfunction or they might deliberately reflect the unpleasant nature or illicit intentions of their developers. Bots can also be hacked or spoofed.


Image credit: Talla Inc

Identifying and building trust for bots

So the question arises: can you be sure that the bot you are interacting with can be trusted, is acting as intended, has not been hacked, and is not controlled by someone impersonating it?

In terms of possible solutions, Botchain, a subsidiary of Boston, MA-based Talla, claims to offer an open-source protocol, akin to internet certificates, that will provide identity authentication for bots, or for artificial intelligence of any kind.

A governance board of nine individuals will carry out the authentication, which founder Rob May admits isn't ideal. However, those nine (Talla holds one of the seats) are in place for three years, and part of their remit is to come up with a better governance model so that they will no longer be needed.

For every activity by an authenticated bot, a hash is written to the blockchain to provide a full, transparent audit trail. A number of bot developers have already signed up as partners, including zoom.ai and Gupshup (the latter develops bots for, among others, Coca-Cola); EY is lined up to audit the software.
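The audit-trail idea itself is simple enough to sketch. The following is a generic illustration, not Botchain's actual protocol or code; the record fields, bot IDs and ticket numbers are invented. Each action is hashed together with the previous entry, so altering any past record invalidates everything that follows it.

```python
# Minimal hash-chained audit log for bot actions (illustrative only).
import hashlib
import json
import time

def append_action(chain: list[dict], bot_id: str, action: str) -> dict:
    """Append a bot action to the audit chain and return the new entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "bot_id": bot_id,
        "action": action,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify(chain: list[dict]) -> bool:
    """Recompute every hash to confirm no entry has been altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    log: list[dict] = []
    append_action(log, bot_id="support-bot-01", action="answered ticket A")
    append_action(log, bot_id="support-bot-01", action="escalated ticket A")
    print("audit trail intact:", verify(log))   # True
    log[0]["action"] = "deleted ticket A"        # tamper with history
    print("audit trail intact:", verify(log))   # False
```

Writing each hash to a public blockchain rather than a private list is what makes such a trail independently verifiable; the chaining logic is the same.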

There is a need to promote intelligent agents, says Joe Pindar, director of product strategy at digital security specialist Gemalto. He sees the distributed ledger technology of blockchain as playing a role. Integrity, accountability, availability and auditability "all come with blockchain for free", he says. Bots, he argues, will need identification along the lines of that used for humans. Solutions "are emerging but I don't think they are very good at present," he says.

More generally, Pindar expects a reversal of the current drive to centralise IT, as the Internet of Things (IoT) and other technologies bring the need for low-latency responses (such as for virtual reality), with computing power moving instead to the "edge of networks". Only the results or decisions will be sent back to the cloud, thereby saving massively on bandwidth. "We see it will be distributed, and blockchain is the most likely solution to do key aspects of that."
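The edge-first pattern Pindar describes can be sketched generically. The device name, threshold and uplink stub below are assumptions for illustration, not any particular product's API: raw readings are processed on the device, and only a small decision payload travels to the cloud.

```python
# Illustrative edge-computing sketch: summarise locally, send only the decision.
import json
import statistics

OVERHEAT_THRESHOLD_C = 85.0  # assumed alert threshold

def send_to_cloud(message: dict) -> None:
    """Stand-in for whatever uplink the deployment actually uses (MQTT, HTTPS...)."""
    print("uplink:", json.dumps(message))

def process_at_edge(device_id: str, temperature_samples: list[float]) -> None:
    """Summarise a window of readings on the device and send only the decision."""
    mean_temp = statistics.fmean(temperature_samples)
    decision = "shutdown" if mean_temp > OVERHEAT_THRESHOLD_C else "ok"
    # Thousands of raw samples stay on the device; the cloud sees a few fields.
    send_to_cloud({"device": device_id,
                   "mean_temp": round(mean_temp, 1),
                   "decision": decision})

if __name__ == "__main__":
    process_at_edge("turbine-7", [82.0, 84.5, 88.1, 90.2])  # -> "shutdown"
```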

Pindar is co-founder and board member of the Trusted IoT Alliance. Set up in September 2017, it describes itself as "Working at the intersection of blockchain and IoT to build an industry of secure and resilient devices providing data services that you can trust".

The whole area of IoT and bots throws up lots of dilemmas but the ability to identify and trust the computer programmes seems a fundamental starting point. If this can’t be achieved, then there is huge potential for unintentional or malicious chaos.