Joined Intelligence: The Machine in the Monkey...

Dan’s thoughts about the future co-existence of natural and artificial intelligence.

Natural Intelligence

Biological brains feature unprecedented computing power while maintaining remarkable energy efficiency. This is the product of evolutionary trial and error, of mutation and natural selection, over a timespan of hundreds of millions of years. In mammalian brains, the evolution of the neocortex increased the voluntariness of actions, improving social harmony within the species and enabling the development of culture and technology.

Our species possesses the most elaborate form of general intelligence, performing particularly well at higher functions such as sensory perception, generation of motor commands, spatial reasoning, and language (Kriegstein 2011). This natural intelligence is, however, limited by the rate of cerebral metabolism, by neural interconnectivity, and by the slow, generations-spanning pace of bio-evolutionary development.

Artificial Intelligence

Artificial intelligence is (currently) based on integrated circuit computing. It uses time multiplexing to simulate a level of interconnectivity and parallelization beyond the underlying integrated circuit’s actual capabilities.

Integrated circuit computing has grown exponentially, but will eventually face physical limits (Brock 2005). With ever-decreasing transistor sizes finally requiring the mass production of single-atom transistors, the current platform for artificial intelligence could sooner or later cease to grow in efficiency.

Joined Intelligence

Due to fundamental differences in architecture and modus operandi, these two types of intelligent agents are best characterized and compared by recognizing their individual strengths and weaknesses. Natural intelligence masters the art of general cognition, but is regularly beaten by artificial intelligence in terms of speed and precision for mathematical operations. Artificial intelligence is a champion of calculation, yet it routinely fails at cognitive tasks considered simple for natural intelligence; for example, anticipating behavior based on sentience and emotions, or learning concepts unsupervised from small data sets.

Together, natural and artificial intelligence surpass each other’s limitations. Joining their powers creates a joined intelligence with capabilities beyond the sum of its parts. By reasonably integrating one with the other, where it makes sense and seems ethical, we can move all the way from physical co-existence to deeply integrated co-operation of bio-mechanical and electro-mechanical machines.

Joined intelligence is about further blurring the already permeable boundary between natural and artificial intelligence. Three recent socio-technological developments indicate that we are moving towards joined intelligence:

  • Delegation of human thinking to narrow intelligent systems.
  • Freeing robots from their safety cages, creating a co-operative human-robot workplace.
  • Integration of smart medical devices into human bodies.


Hand-sized devices, with the computing power of last decade’s room-filling supercomputers, complement our biological brains daily. We have expanded our minds and outsourced some of our thinking to the mobile computers in our pockets, often reaching out to the even more powerful intelligence that is the cloud (Kurzweil 2012). We delegate information processing and decision-making to digital assistants and services. They advise us whether to take the bus or the train to get from A to B, remind us to take an umbrella before leaving home in the morning, or make reservations at our favorite restaurant based on calendar appointments. Human natural intelligence expands its capabilities by delegating thinking tasks to one or many artificial narrow-intelligent agents.


New generations of industrial robots are more aware of their surroundings than their predecessors. They evolve from working alongside humans to employing a truly co-operative paradigm, not only supporting human workers with strength and precision but also adapting to changes and learning through physical feedback and guidance. Some of these gentle giants have been freed from their safety cages, and more will follow as their cognitive capabilities increase. Human-robot co-operation redefines workflows in factories and workshops, enabling natural and artificial intelligence to work towards a common goal by using their different strengths to eliminate each other’s weaknesses.


Integrating external entities into the human body is nothing new. Essential mitochondria, the organelles often described as the powerhouses of the cell, do not share our DNA, offering strong evidence for a cellular symbiosis established millions of years ago. Insulin pumps and artificial cardiac pacemakers also form a special symbiosis with the human body: these smart devices keep patients alive, while patients take care of software updates and recharge their batteries. One of the deepest forms of integration is the cochlear implant. Over time, the brain adapts to the implant’s electrical signals by physically rewiring neurons. The data pre-processing artificial intelligence and the natural intelligence put forth a combined effort to enable the higher function of hearing (Mauger 2014). Modifying the human body with devices containing artificial intelligence is only in its infancy and is not restricted to medical applications.


Joined intelligence is a powerful yet dangerous concept. At the heart of every joined intelligence stands the alignment of goals. It would be a catastrophically naïve move of any natural intelligence to join forces with an artificial intelligence of orthogonal goals.

In a future of deeply integrated, joined intelligence, we will be the machines and the machines will be us.

Looping AIs (Siri, Alexa, Google Home)

There are plenty of videos showing how to trick Amazon Echo (Alexa) and Google Home into an infinite loop. Some folks used calendar entries, others relied on the good old Simon-says trick. Since we share our realm with at least four AIs (that we know of), my wife and I decided to level up in the looping game.

It turns out, there are quite a few challenges to overcome when you want to integrate Siri or Cortana into the loop. Let’s start with a summary video. Skip to the end to see the outtakes 😅

Basic Idea

The basic idea is to make one AI say something that triggers the second AI to say something that triggers the next AI, and so on… We used calendar events and notes, mostly because you can easily manipulate them while the experiment is running. This also makes for a great game: manipulate the AIs’ conversation without any human speaking. Just cloud data manipulation. It’s harder than one may think!

Training Siri

Siri has a cool feature: she doesn’t listen to everyone. To use Siri, one has to accustom her to one’s own voice. This is done by speaking pre-defined phrases like “Hey Siri, how is the weather today?” Luckily, those phrases are not randomized but stay the same for every training session. I see an attack vector there. 😬

To make Google Home say the golden phrases, we created a calendar event with the phrases as its title. Unfortunately, Google Home speaks too fast (or too computer-ish?) for Siri to catch up. We figured out that adding dots or commas slowed Google Home down a bit, at least enough for Siri to keep up.

Another caveat we found is that Google Home loves to truncate long calendar event titles. We had to change the event title for every step of the training process. That was tedious, and it took several attempts until Siri was trained well. One time, we accidentally trained her on the phrase “HeySiri HeySiri”. 😂

This is what the calendar entry looked like:

Listening into the past

We discovered that Siri likes to listen into the past. When we made Google Home say something like “You have one appointment and the title is: Hey Siri…”, Siri would not start listening at or after the phrase “Hey Siri”, but would also grab a couple of phonemes from before the activation phrase. Sounds scary, but what do we expect from an always-listening AI, right?

Trivia: Look at what time the screenshot was taken. Coincidence, I promise!

Training Cortana

We were not able to train Cortana; she would not listen to an artificial voice. It may be that the microphone on the laptop we used wasn’t good enough. Or maybe Microsoft did very well on Cortana’s recognition algorithms and/or artificial neural networks. Since we were in a hurry, we threw Cortana out of the race. We leave this fruit hanging for someone else to grab.

Note: Yep, I am calling these things AI all the time. That is for convenience; I do know quite a bit about AI, natural language processing, machine learning, and artificial neural networks. And I do know these gadgets are merely ANI (Artificial Narrow Intelligence), far from AGI (Artificial General Intelligence) or what some may call strong AI. Still, I like to anthropomorphize and call them my AIs.

How to configure a WireGuard tunnel on OpenWrt using LuCi

[Update April 2017: I noticed people are still building configurations based on this outdated blog post. The way wireguard addresses interfaces in OpenWrt/LEDE has changed. Please consult a more recent blog post on the topic!]

A couple of months ago I worked on a concept for a sophisticated, IPv6-only overlay network spanning multiple sites and various devices. It is part of a long-term project, which made it appropriate to assess not only current but also future protocols. The WireGuard cryptokey routing protocol was one of the candidates. The more I work with this still-experimental protocol, the more I am convinced that it will become one of the major VPN protocols. It is lean and clean, easy to configure, and exceptionally reliable. Furthermore, it seems to be very secure. But as a word of warning: I am less of a cryptography auditor and more of a programmer and network engineer.

I do believe in WireGuard and had the luck to participate in the project by contributing documentation and regularly testing the snapshots. It is a small, agile (BS Bingo!) and responsive group. The development speed is amazing and the head developer probably never sleeps 😮

Today I’d like to show you how to configure a WireGuard tunnel using OpenWrt/LEDE and luci-proto-wireguard. I developed luci-proto-wireguard during the past weeks as a side project. With help from beta testers and experienced OpenWrt folks, the code matured and now awaits merging into the official repositories.

For this howto I assume you run the latest snapshot of OpenWrt/LEDE. I will also assume that you have a basic understanding of WireGuard.

First step is to create the WireGuard interface. Go to the Interfaces page and create a new interface. Select WireGuard VPN in the dropdown menu. If this option does not show up, then you are missing luci-proto-wireguard 💩. Head over to Software and install it.
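If you prefer a terminal over the Software page, the same installation can be done over SSH. A minimal sketch using the opkg package manager; `wireguard-tools` (which provides the `wg` utility mentioned later) is listed explicitly in case your build does not pull it in as a dependency:

```shell
# Refresh the package lists first, then install the LuCI protocol helper.
opkg update
opkg install luci-proto-wireguard wireguard-tools
```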

Think of a good name for the interface; in this article we will proceed using foo 😬 The next thing you will see is the interface configuration page. I tried to make it as self-explanatory as possible by including helpful hints below the options. The most important configuration data are the Private Key of the interface and the Public Key of at least one peer. Also, don’t forget to add the network or address of the other end of the tunnel to Allowed IPs. Otherwise the tunnel won’t work as expected.
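Under the hood, LuCI writes these settings to /etc/config/network. A sketch of roughly what the resulting stanza could look like; the keys, endpoint, and prefix below are placeholders, and option names may differ between versions of the protocol handler, so treat this as illustration rather than reference:

```
config interface 'foo'
        option proto 'wireguard'
        option private_key '<interface private key>'
        option listen_port '51820'

config wireguard_foo
        option public_key '<peer public key>'
        option endpoint_host 'vpn.example.com'
        option endpoint_port '51820'
        list allowed_ips 'fd00:1234::/64'
```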

If you like to add some post-quantum resistance, you can do so in the advanced tab.

In the firewall tab, you can create a new zone or assign the interface to an existing zone. I recommend doing this after the device is set up and working.

Click Save and Apply once you are satisfied.

Now you should have a WireGuard tunnel interface, but it has not been assigned an IP address yet. I wanted to allow a wide range of setups and enable everyone to do even the weirdest things with their routers, so I removed the direct addressing feature that I had implemented in an earlier version. Luckily, you can create a static configuration on top of foo by creating a new interface and selecting Static address as the protocol.

It is important to select foo as the underlying interface, either by finding it in the interface list, or, if it does not (yet) show up there, by typing @foo into the custom interface field.

Voilà! We now have the standard static addressing page. Configure according to your VPN concept and hit Save and Apply to proceed.

You should now see both interfaces in your interface list. I recommend putting them into the same firewall zone for easier administration. You can tell that I moved them into the same zone from the color of the interfaces. Interfaces foo and bar share the same firewall zone color.

I’d like to add some monitoring, but that isn’t ready yet. In the meantime, you can check on your WireGuard interface(s) using wg on the command line.
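A quick sketch of checking the tunnel from an SSH session on the router (foo is the interface name assumed throughout this article):

```shell
# Show the runtime state of all WireGuard interfaces:
# public keys, endpoints, allowed IPs and the latest handshake.
wg show

# Or restrict the output to our tunnel interface.
wg show foo
```

A recent handshake timestamp in the output is the easiest way to confirm the tunnel is actually up.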

If you find any bugs, please report them. Thanks for reading and happy cryptokey routing everyone!

Hint: On some devices it may be necessary to restart the device after installing luci-proto-wireguard, so that the netifd daemon correctly loads the helper script that comes with wireguard-tools. Thanks, Stefan, for pointing this out!

Update (July 2018)

I receive quite a few emails on the topics of OpenWrt and WireGuard every week. Unfortunately, I do not have the time to answer all of them individually. So I kindly ask you to direct questions regarding WireGuard and OpenWrt/LEDE to the OpenWrt Forums or to the WireGuard Mailing List. There the questions will be exposed to a wider audience and may additionally help other people facing the same challenges. Thank you!