Google Brain Creates Neural Networks that Write Their Own Encryption

Google Brain creates neural networks that can devise their own encryption and communicate securely

It looks like achievements in the artificial intelligence and deep learning space are coming in fast and furious these days. Shortly after Microsoft boasted of developing the world’s most accurate speech recognition system, Google has now created a pair of deep learning neural networks that write their own encryption – and keep improving on it.

Meet Alice and Bob – not your average, everyday couple. These two are neural networks with names, and they were created to find out whether neural networks could devise their own encryption methods and then use them to communicate securely with each other. The results were shocking, to say the least. Not only did the pair successfully invent their own way of encrypting messages with a shared secret key, but it was effective enough that even a third neural network – an eavesdropper called Eve – was unable to break the code.

In fact, while Eve was trying to break Alice and Bob’s encryption, the pair kept improving it at a faster rate than Eve could make progress on decrypting their messages.
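For readers curious about the mechanics: the underlying Google Brain paper (“Learning to Protect Communications with Adversarial Neural Cryptography” by Abadi and Andersen, 2016) frames this as an adversarial training loop. Below is a minimal PyTorch sketch of that idea – the network shapes, bit lengths and loss weighting are illustrative assumptions on our part, not the paper’s exact configuration (the paper uses convolutional “mix and transform” layers, for instance).

```python
# Minimal sketch of adversarial neural cryptography (after Abadi &
# Andersen, 2016). Bits are encoded as -1/+1 floats. All sizes and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

N_BITS = 16  # length of plaintext and shared key, in bits

def net(in_dim, out_dim):
    # Simple MLP stand-in for the paper's convolutional architecture.
    return nn.Sequential(
        nn.Linear(in_dim, 2 * in_dim), nn.ReLU(),
        nn.Linear(2 * in_dim, out_dim), nn.Tanh(),  # outputs in (-1, 1)
    )

alice = net(2 * N_BITS, N_BITS)  # plaintext + key -> ciphertext
bob   = net(2 * N_BITS, N_BITS)  # ciphertext + key -> recovered plaintext
eve   = net(N_BITS, N_BITS)      # ciphertext only -> guessed plaintext

opt_ab  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_eve = torch.optim.Adam(eve.parameters())

def random_bits(batch, n):
    # Random {-1, +1} vectors standing in for bit strings.
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(5000):
    plain, key = random_bits(256, N_BITS), random_bits(256, N_BITS)

    # Eve's turn: minimise her error given the ciphertext alone.
    # detach() stops her gradients from leaking back into Alice.
    cipher = alice(torch.cat([plain, key], dim=1)).detach()
    eve_loss = (eve(cipher) - plain).abs().mean()
    opt_eve.zero_grad(); eve_loss.backward(); opt_eve.step()

    # Alice and Bob's turn: Bob must recover the plaintext, while Eve is
    # pushed toward chance level (mean error of 1.0 under this encoding).
    cipher = alice(torch.cat([plain, key], dim=1))
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - plain).abs().mean()
    eve_err = (eve(cipher) - plain).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

The interesting part is the final loss line: Alice and Bob are rewarded not for beating Eve outright, but for pushing her guesses toward chance level – which matches the dynamic the researchers observed, with the encryption improving whenever Eve started to catch up.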

With this experiment, Google has demonstrated one thing: that computers, like humans, are much better at creating codes than at cracking them. And it is of tremendous significance that these artificial intelligence programs are able to reflect human abilities to this degree.

This isn’t the first time a “robot being” has mimicked a human trait, however. NASA has been developing such capabilities for over a decade now, using what it calls “collaborative control” and reasoning. This approach allows robots to ask questions of humans and then use complex reasoning algorithms to develop their own skills. A robotic apprentice, if you will.

In fact, several attempts are being made around the world to instill human characteristics in robots. At the University of California, for example, researchers are working on introducing human “personality flaws” such as fear into robots. They succeeded in creating a robot that would literally be paralyzed with fear and refuse to cross a room – much the way a mouse placed in a strange room will “hug the walls” and refuse to venture into the middle until it becomes comfortable with its surroundings.

Two years ago, in Japan, two female androids – or gynoids, as they are called – shocked a room full of news reporters by delivering a live broadcast of the day’s news. ‘kodomoroid®’ and ‘otonaroid®’, as the pair are named, are now part of a museum exhibit that shows visitors how robot behavior contrasts with human behavior.

In 2014, the Office of Naval Research launched a five-year research program funding universities to build human morals into artificial intelligence entities. Essentially, it wants researchers to be able to give robots a sense of right and wrong. That seems a bit presumptuous, to be honest. Does the U.S. military actually know the difference between right and wrong? Don’t they just follow orders? And why would they need a robot that can question a direct order and evaluate the rightness or wrongness of that order?

In a report last week, we wrote about theoretical physicist and cosmologist Stephen Hawking and the new research center at Cambridge University. That facility has been charged with the responsibility of creating guidelines for AI development so it doesn’t get out of hand.

In that article we also saw how one ‘robot anthropologist’ considered our innate fear of robots to be merely an extension of humans fearing other humans. We also saw the real danger of autonomous weapons systems being susceptible to cyber attacks.

All of this points to one conclusion: the robots we create will be a reflection of our own abilities, skills, flaws and other qualities. In other words, the capabilities of an AI being will be guided by the capabilities of the humans who create it.

If you really think about that, it’s a scary thought. The concept of a “mad scientist” creating a robot that can destroy the world suddenly leaps out of fiction and into fact. So what it boils down to is: who is creating our robots and autonomous systems, and are they fit to do it?

In the future, we will likely have international regulatory bodies that define and outline the “proper” way to create robots – with top technology companies acting as consultants. There will be a ‘policing’ of the artificial intelligence development community to make sure humans toe the line when developing autonomous technologies. There will be guidelines, rules, regulations, protocols and best practices around AI development. We haven’t reached that stage yet, but there is clearly an urgent need for it right now.

In summary, while artificial intelligence entities do not seem to pose a direct danger to us, all the evidence suggests that AI beings can be used for harm depending on who creates and controls them. And that’s the challenge that we, as humans, will continue to face over the next few decades.
