How to configure WireGuard on OpenWrt/LEDE using LuCi

A while ago, I simplified the way WireGuard interfaces are configured with in-tunnel IP addresses.

So here is a new step-by-step guide on how to configure a WireGuard tunnel on OpenWrt/LEDE. WireGuard is a cryptokey routing protocol, or, as many refer to it, a VPN.


For this guide I assume you run the latest snapshot of, let’s say, LEDE. I will also assume that you have a basic understanding of WireGuard.

The first step is to create the WireGuard interface. Go to the Interfaces page and create a new interface. Select WireGuard VPN in the dropdown menu. If this option does not show up, you are missing luci-proto-wireguard 💩. Head over to Software and install it.

Think of a good name for the interface; in this article we will proceed using foo 😬 The next thing you will see is the interface configuration page. I tried to make it as self-explanatory as possible by including helpful hints below the options. The most important configuration data are the Private Key of the interface and the Public Key of at least one peer. Also, don’t forget to add one or more Addresses, and add the network or address of the other end of the tunnel to Allowed IPs. Otherwise the tunnel won’t work as expected.
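If you prefer the command line over LuCI, the same settings end up in /etc/config/network. The fragment below is a rough sketch; the interface name foo, the keys, the addresses, and the endpoint are placeholders you have to replace with your own values:

```
config interface 'foo'
	option proto 'wireguard'
	option private_key '<interface private key>'
	list addresses '192.0.2.2/24'

config wireguard_foo
	option public_key '<peer public key>'
	list allowed_ips '192.0.2.0/24'
	option endpoint_host 'vpn.example.com'
	option endpoint_port '51820'
```

Note how each peer lives in its own wireguard_foo section named after the interface it belongs to.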

If you would like to add some post-quantum resistance, you can do so in the Advanced tab.

Click Save and Apply once you are satisfied.

Now you should have a WireGuard tunnel interface.

I also created a monitoring module. It is called luci-app-wireguard and should be available in all major repositories. Why not give it a shot while you are at it?

You can also check on your WireGuard interface(s) using wg on the command line.

If you find any bugs, please report them. Thanks for reading and happy cryptokey routing everyone!

Hint: On some devices it may be necessary to restart the device after installing luci-proto-wireguard, so that the netifd daemon correctly loads the helper script that comes with wireguard-tools.


The former approach required a static interface on top of the WireGuard tunnel interface. It was introduced to address concerns that were raised in the merge discussion on luci-proto-wireguard. I was never a big fan, but saw it as a necessary evil to get the change merged in time. #politics It’s all history now 🙃

Update (July 2018)

I receive quite a few emails on the topics of OpenWrt and WireGuard every week. Unfortunately, I do not have the time to answer all of them individually. So I kindly ask you to direct questions regarding WireGuard and OpenWrt/LEDE to the OpenWrt Forums or to the WireGuard Mailing List. There the questions will be exposed to a wider audience and may additionally help other people facing the same challenges. Thank you!

Project sixfw - Lessons Learned

tl;dr: I am closing the SIXFW project. It hurts, but having learned from it makes it easier.

A bit over a year ago, at the Chaos Communication Congress 2015, I was part of the team responsible for the NAT64 part of the congress network. We usually used a commercial appliance to do the IPv6-to-IPv4 translation. Running super-expensive carrier equipment is at the heart of our operations, but whenever possible we like to deploy open-source software or equipment developed by the hacker community. There was no NAT64 appliance that fell under this definition at the time, and I think as of today there still is none; at least none that I am aware of. So I decided to do something about that and initiated the SIXFW project with a clear objective:

An easy-to-use, non-bloated firewall/NAT64 appliance that thinks IPv6 first.

In retrospect, this was a hell of an objective.

I gave a lightning talk about my idea, and the day after, a couple of interested people gathered, providing opinions, tips, and ideas. To this day, however, I have never released anything that comes close to my goal. Here is how and why I failed:

Lesson 1: Scope!

The attentive reader may have spotted the contradiction in the objective already. A contradiction I did not see back then, though. When we aim for ease of use, we can hardly address the professional market for NAT64 appliances. Commercial NAT64 appliances come with a lot of configuration options; there is a knob or switch for everything. They pose a frustratingly high barrier for network engineers who are new to NAT64 networking. Lowering that barrier was one of my goals. This meant fewer features, fewer options to choose from, and less ability to adapt to the very network the appliance should serve. It would eventually turn the professional appliance into a consumer router with NAT64 capabilities. I wanted to develop an appliance for a complex technology (which has its nifty caveats) that could be used by the average grandmother. I did not scope my project properly, and I did not have a clear definition of the target audience.

Lesson 2: Do your research!

I quickly settled on OpenBSD as the underlying operating system, partly because I liked its stability and partly because I believed there were already enough Linux- or FreeBSD-based firewall distributions around. What I did not do was proper research. I should have asked myself which operating system really is the best fit for the problem, taking into account the ecosystem, packaging infrastructure, release schedules, and security patching processes. Had I approached this topic with a more open mind, I may or may not have chosen another operating system.

Lesson 3: Do not re-invent the wheel!

It sounds so simple, but it is actually hard. When you aim for a very lean software product and begin your research on existing components to include, you see bloat everywhere you look. And I certainly wanted to avoid bloat! But is every feature of a component I would not use already bloat? Harmless code that never gets executed but has to be shipped or patched out scratches the image of perfection. Nevertheless, writing everything from scratch just to have the best-fitting solution means a lot of work and introduces new problems. For example, I wrote a web interface and a RESTful API for configuring the firewall. To be precise, I wrote a piece of software that would read a definition of an API and produce static code for a server (Python) and a client (HTML5, CSS, JavaScript) implementing that very API. Meta-programming, if you will. Beautiful, but totally unnecessary. As so often: done is better than perfect!

Stepping back

After a year of development, I began to realize that I would not have my prototype ready for the 2016 Chaos Communication Congress. Damn! Instead of panicking, I did something that often helps me clear my head in messed-up situations: stepping back and dissecting the ashes.

It took me a couple of hours to clear my head and evaluate the code, the objective, and the possibilities. Then I decided to try something different: why not take an existing firewall distribution and turn it into a NAT64 appliance? Just for the fun of it, just to see what it would look like, just to learn how else I could approach the problem. So I took a look at OpenWrt/LEDE and started adding Jool to make it run NAT64 in kernel mode (there were user-space tools for that already, but they did not perform well). I then extended the existing Unbound DNS package to support DNS64 configuration options. Now there were NAT64 and DNS64 capabilities in a well-established distribution, and for the most common scenarios they were even configurable via the web interface! It was a great pleasure to see such success in such a short time, after having spent too much time and too many resources on developing a similar solution from scratch.
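For a sense of scale: the DNS64 half of such a setup is tiny. In a raw unbound.conf it boils down to two options, shown here assuming the well-known NAT64 prefix (on OpenWrt the same values are set through the Unbound package’s configuration instead):

```
server:
	# put the dns64 module in front of the normal resolver modules
	module-config: "dns64 validator iterator"
	# synthesize AAAA records inside the well-known NAT64 prefix
	dns64-prefix: 64:ff9b::/96
```

With this in place, clients asking for names that only have A records receive synthesized AAAA answers pointing into the prefix the NAT64 translator serves.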

This happened close to the end of the year, and it was about time to think about the event’s NAT64 network. Again, stepping back and trying another approach helped me clear my mind. I ended up writing a small Go program that generates all the configuration files necessary to turn a freshly installed OpenBSD into a NAT64 appliance. No web interface, no fancy bling-bling, just plain and reliable configuration files. It worked very well!
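I never published that little program, but the core idea is simple enough to sketch. The snippet below is a minimal, hypothetical version, not the original code: it renders a pf.conf fragment containing OpenBSD’s af-to translation rule from a template. The interface name em1 and the addresses are made-up example values:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nat64Config holds the few values that differ between deployments.
type nat64Config struct {
	InsideIf  string // interface facing the IPv6-only clients
	OutsideIP string // IPv4 address the translated traffic originates from
	Prefix    string // NAT64 prefix, usually the well-known 64:ff9b::/96
}

// pfTemplate is a pf.conf fragment; the af-to keyword tells pf to
// translate between address families (available since OpenBSD 5.1).
const pfTemplate = `# pf.conf fragment, generated -- do not edit by hand
pass in on {{.InsideIf}} inet6 from any to {{.Prefix}} af-to inet from {{.OutsideIP}}
`

// renderPF fills the template with the given deployment values.
func renderPF(cfg nat64Config) string {
	var buf bytes.Buffer
	tmpl := template.Must(template.New("pf").Parse(pfTemplate))
	if err := tmpl.Execute(&buf, cfg); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(renderPF(nat64Config{
		InsideIf:  "em1",
		OutsideIP: "203.0.113.1",
		Prefix:    "64:ff9b::/96",
	}))
}
```

In practice the generator would render several such files (pf.conf, sysctl.conf, the resolver configuration) from one struct; text/template with template.Must keeps it short and fails loudly on a broken template.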

What’s next?

So, where will I go from here?

  • I will continue to contribute to OpenWrt/LEDE, especially in the areas of IPv6-only networking and NAT64. There is still a lot to be done in this field. I’d rather contribute to a successful project that actually helps people than swim alone in my own soup. I see more potential for learning for all of us when IPv6-only networking is accessible to a broad range of interested people.
  • Since NAT64 works surprisingly well and reliably on OpenBSD, I will keep my little Go program and generate updated configuration files every time I need to deploy an appliance-like server for NAT64. But no further energy will be invested into making it a fancy, shiny pet. It works, and that is good enough in this case.
  • I will be shutting down the SIXFW project, including its website, Twitter, and GitHub accounts.

Thank you for reading this, and certainly thanks to all of you who supported the SIXFW project in one way or another. I learned a lot and hope that sharing my lessons learned helps someone else some day, too! What a ride!

Bonus Material

Although I will not proceed with SIXFW and will shut the project down instead, I wanted to think about how I could have done it better. So I imagined myself back in December 2015, starting the project again. This time, by writing a project outline to explain the project, divide the work, and conquer it.

The Problem

The Internet suffers a severe sickness called IPv4 address exhaustion (see appendix for details). It took the Internet community a while to develop and apply the cure in the form of a new Internet Protocol called IPv6. Today, the patient is stabilizing and IPv6 deployment rates have been growing more than linearly. One problem remains, though: these two protocols are not designed to be interoperable. Once a network goes IPv6-only, connectivity to the old Internet is gone. All nodes in that network (end users, servers, everything) will be excluded from information that is not accessible via IPv6. The negative side effect of the emergency cure we just applied is the patient slowly losing vision in one eye. As a believer in the free flow of information, I think this is a problem that needs to be solved.

The Solution

The scope of this project is to build a NAT64/DNS64 translating network appliance. This involves choosing a suitable hardware platform alongside a mature, well-performing operating system. The software will use the platform efficiently. However, it will not be bound to a specific product or vendor and will provide compatibility with a large range of products from different price categories. For example, SIXFW is expected to run on a low-cost computer as well as on a high-performance enterprise server. The software will be designed using multiple components. A user interface will provide statistics and system health data as well as allow configuration of operational parameters. Using a dedicated control daemon, system tasks will be separated from the user interface to provide an additional layer of security. Proper access management and sane defaults (read: preconfigured for most applications) will ensure that initial functionality is provided with minimal administrative overhead.

Target Audience

The SIXFW appliance will be useful for anyone who uses or maintains a state-of-the-art network connected to the Internet. Although the vast majority of end users will not know that their information flow to the old Internet is being translated, they will benefit from the existence of an open-source, ready-to-use NAT64/DNS64 appliance. More tech-savvy users may want to use SIXFW in their home networks alongside existing customer premises equipment (CPE, e.g. a Fritz!Box). Organizations may choose to prefer open-source solutions for their connectivity for security or budgetary reasons. As SIXFW matures, it may operate side by side with enterprise appliances at the internet service provider level, transparently serving hundreds or thousands of people. SIXFW significantly lowers the barriers to accessing information that would otherwise be invisible. Everyone who regularly accesses wired or wireless networks at varying locations with different uplink capabilities is part of the target audience. If you use the Internet, you are part of this project’s target audience. However, I will do my best to keep you from noticing it, so that you can access the information you want without disturbance.

Project Risks

Reaching milestone EVALHW depends on the availability of different hardware platforms. Supply shortages may lead to delays. This risk has been mitigated by an upfront investment in hardware: I have at least three platforms to evaluate, independent of market availability.

Milestone ALPDEV comprises the major development part of the project. Due to its highly integrated nature, individual components are expected to be developed in parallel using an incremental approach. Further division of this milestone into smaller tasks is intentionally avoided. This poses a risk to the overall project, as time estimates may be significantly off. Furthermore, unforeseen complexity issues or unexpected interdependencies could add additional workload.

The event which will provide the environment for the testing required to reach milestone ALPTST is still in an early planning stage. I follow the progress of the planning committee closely to get early notice of relevant changes. For example, the event could be canceled due to funding issues or external threats like harsh weather conditions. I plan to mitigate this risk by choosing another test environment, e.g. an indoor conference, if necessary.

Milestone EVALHW

  • Evaluate hardware platforms (3-4) and operating systems (2-3) for reference implementation
  • Install different OS on different platforms
  • Document experiences, issues, and caveats
  • Document opportunities
  • Evaluate how fit-for-purpose each platform is
  • Investigate possible OS and security update workflows
  • Assess NAT64/DNS64 installation routines


Deliverables:

  • Hardware platform evaluation result matrix
  • Operating system evaluation result matrix

Milestone REFDEV

Develop reference implementation

  • Install chosen operating system on chosen platform (clean install)
  • Manually install and configure NAT64/DNS64 services
  • Manually install and configure router services
  • Develop sane default firewall rules (as a starting point for further improvements later)
  • Run client connectivity tests with major mobile and workstation operating systems


Deliverables:

  • Reference implementation source code
  • Full documentation on how the reference implementation was built

Milestone REFTST

SIXFW reference implementation field test

  • Deploy and integrate the reference implementation in the 33C3 hacker conference network (I am a member of the network operations center and will be responsible for transition technologies in the NAT64 part of the network)
  • Document and fix bugs that show up during real-life deployment (~2 days)
  • Document bugs, issues and collect anonymized usage data for later analysis during operational phase of the conference (~4 days)


Deliverables:

  • Short, summarizing field test report
  • Updated source code of reference implementation

Milestone ALPDEV

Develop alpha release

  • Design solution architecture
  • The proposed architecture comprises: control daemon, user interface, statistics daemon, update functionality, user interface security, and access management


Deliverables:

  • Source code of alpha release

Milestone ALPTST

SIXFW alpha release field test

  • Deploy and integrate the alpha release in the next outdoor hacker event network (I am a member of the network operations center and will be responsible for transition technologies in the NAT64 part of the network)
  • Document and fix bugs that show up during real-life deployment (~2 days)
  • Document bugs, issues and collect anonymized usage data for later analysis during operational phase of the event (~4 days)


Deliverables:

  • Short, summarizing field test report
  • Updated source code of alpha release

Milestone INFSTR

Build supporting infrastructure

  • Develop tools for release management
  • Develop tools for building and signing of pre-configured ready-to-go releases
  • Create initial documentation
  • Design and update the website to reflect the project (current page content is misleading and not in line with the project goals)


Deliverables:

  • Tools
  • Documentation
  • Updated website

Joined Intelligence: The Machine in the Monkey...

Dan’s thoughts about the future co-existence of natural and artificial intelligence.

Natural Intelligence

Biological brains feature unprecedented computing power while maintaining remarkable energy efficiency. This is a product of evolutionary trial and error, mutation, and natural selection over a timespan of hundreds of thousands of years. In mammalian brains, the evolution of the neocortex increased the voluntariness of actions, improving social harmony within the species and enabling the development of culture and technology.

Our species maintains the most elaborate form of general intelligence, performing particularly well at higher functions such as sensory perception, generation of motor commands, spatial reasoning, and language (Kriegstein 2011). This natural intelligence is, however, limited by the rate of cerebral metabolism, neural interconnectivity and the generations-spanning bio-evolutionary development.

Artificial Intelligence

Artificial intelligence is (currently) based on integrated circuit computing. It utilizes time multiplexing to simulate a level of interconnectivity and parallelization beyond the underlying integrated circuit’s actual capabilities.

Integrated circuit computing has grown exponentially, but will eventually face physical limits (Brock 2005). With ever-decreasing transistor sizes ultimately requiring the mass production of single-atom transistors, the current platform for artificial intelligence could sooner or later cease to grow in efficiency.

Joined intelligence

Due to fundamental differences in architecture and modus operandi, these two types of intelligent agents are best characterized and compared by recognizing individual strengths and weaknesses. Natural intelligence masters the art of general cognition, but is regularly beaten by artificial intelligence in terms of speed and precision for mathematical operations. Artificial intelligence is a champion of calculus, yet it routinely fails at cognitive tasks considered simple for natural intelligence; for example, anticipating behavior based on sentience and emotions or unsupervised learning of concepts from small data sets.

Together, natural and artificial intelligence surpass each other’s limitations. Joining their powers creates a joined intelligence with capabilities beyond the sum of both. By reasonably integrating one with the other, where it makes sense and seems ethical, we can go all the way from physical co-existence to deeply integrated co-operation of bio-mechanical and electro-mechanical machines.

Joined intelligence is about further blurring the already permeable boundary between natural and artificial intelligence. Three recent socio-technological developments indicate that we are moving towards joined intelligence:

  • Delegation of human thinking to narrow intelligent systems.
  • Freeing robots from their safety cage, creating a co-operative human robot workplace.
  • Integration of smart medical devices into human bodies.


Hand-sized devices with the computing power of the last decade’s room-filling supercomputers complement our biological brains daily. We have expanded our minds and outsourced some of our thinking to the mobile computers in our pockets, often reaching out to the even more powerful intelligence that is the cloud (Kurzweil 2012). We delegate information processing and decision-making to digital assistants and services. They advise us whether we should take the bus or the train to get from A to B, remind us to take an umbrella before leaving home in the morning, or make reservations at our favorite restaurant based on calendar appointments. Human natural intelligence expands its capabilities by delegating thinking tasks to one or many artificial narrow-intelligent agents.


New generations of industrial robots become more aware of their surroundings than their predecessors. They evolve from working alongside humans to employing a real co-operative paradigm, not only supporting human workers with strength and precision but also adapting to changes and learning through physical feedback and guidance. Some of these gentle giants have been freed from their safety cage, and more will follow as their cognitive capabilities increase. Human-robot co-operation redefines workflows in factories and workshops, enabling natural and artificial intelligence to work towards a common goal by using their different strengths to eliminate the other’s weaknesses.


Integrating external entities into the human body is nothing new. Essential mitochondria, the organelles often described as the powerhouses of the cell, do not share our DNA, offering strong evidence for a cellular symbiosis established millions of years ago. Insulin pumps and artificial cardiac pacemakers also form a special symbiosis with the human body: these smart devices keep patients alive, while patients take care of software updates and recharge their batteries. One of the deepest forms of integration is the cochlear implant. Over time, the brain adapts to the implant’s electrical signals by physically rewiring neurons. The data-preprocessing artificial intelligence and the natural intelligence put forth a combined effort to enable the higher function of hearing (Mauger 2014). Modifying the human body with devices containing artificial intelligence is only at its beginning and is not restricted to medical applications.


Joined intelligence is a powerful yet dangerous concept. At the heart of every joined intelligence stands the alignment of goals. It would be a catastrophically naïve move of any natural intelligence to join forces with an artificial intelligence of orthogonal goals.

In a future of deeply integrated, joined intelligence, we will be the machines and the machines will be us.