On pCell

Artemis recently announced that DISH was kind enough to lease spectrum in San Francisco to them. Now they can deploy the world’s first pCell (personal cell) network, which will have to prove itself outside the lab.

Scaling cellular networks has become a real challenge lately, from using more spectrum to evolving the standards (LTE even has the word evolution in its name!). Cells have become smaller, and cell tower networks are denser today than ever before, requiring more bandwidth per tower while still being a significant investment for the mobile data network provider in charge. The available spectrum is very limited, and the density of cell towers does not scale well, even when providers share physical towers.

A disturbing number of different antennas and standards is in use on a mature cellular network, and it is very unlikely that the oldest ones can be dropped soon. Too many devices, especially sensors and remotely controlled industrial machinery, still use the aged GSM standard.

While pCell as a technology is able to tackle these challenges in a very appealing manner, it comes with its own baggage. The basic idea behind pCell is leveraging signal interference instead of avoiding and circumventing it. By sending out carefully crafted waveforms from different locations and making sure that they add up (think of subtraction as just a special kind of addition) to a specific waveform at a specific point in time and space, the technology creates personal cells. These cells can be very small, because a receiver usually does not care what the waveform looks like anywhere other than directly at its antenna(s). The pCell base stations therefore cooperate with each other to make sure the emitted signal looks like, for example, an LTE signal at each of the hundreds of small personal cells they are creating. Artemis’ Steve Perlman claims these cells are about 1 centimeter in diameter. That is smaller than what we have seen in other approaches built on the same basic idea, such as network MIMO or cooperative beamforming.
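The superposition trick can be sketched in a few lines of Python. This is my own toy illustration, not Artemis’ actual algorithm: each transmitter pre-compensates the phase of its carrier so that all waves arrive in phase at one chosen point, while a few centimeters away the phases no longer line up.

```python
import math

# Toy sketch (my own illustration, not Artemis' actual algorithm): four
# transmitters pre-compensate the phase of a 1.9 GHz carrier so that the
# four waves add up in phase at one target point.

C = 3e8              # speed of light, m/s
F = 1.9e9            # carrier frequency, Hz (a typical cellular band)
WAVELENGTH = C / F   # ~15.8 cm

transmitters = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = (4.3, 6.1)  # where the "personal cell" should form

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def field_at(point, t=0.0):
    """Sum of all transmitter signals at `point`. Each transmitter shifts
    its phase so its wave arrives at `target` with zero phase offset."""
    total = 0.0
    for tx in transmitters:
        phase = 2 * math.pi * (dist(tx, point) - dist(tx, target)) / WAVELENGTH
        total += math.cos(2 * math.pi * F * t + phase)
    return total

print(field_at((4.3, 6.1)))  # at the target all waves align: amplitude 4.0
print(field_at((2.0, 2.0)))  # elsewhere the phases scatter: much smaller
```

Move the probe point a few centimeters away from the target and the phases decorrelate, so the coherent peak collapses. That spatial selectivity is the intuition behind the tiny personal cells.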

While Artemis likes to describe the approach as groundbreaking and revolutionary, it is well-founded in communication theory. There is no magic to the math behind it, but there is a ridiculous amount of magic, if you will, involved in making it work in the real world. They are not breaking the Shannon-Hartley theorem, as one might suspect. They are just ending the very annoying fact that everybody is on the same channel, thus ideally providing the full channel bandwidth to each one of the personal cells. What I have seen from their demonstrations so far looked promising. For a start, I would not care whether I get only 80% of the channel capacity or the full 100%. As long as the 80% is all mine, it is a huge step forward.
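The 80% remark can be put into numbers with the Shannon-Hartley theorem itself. A minimal sketch, where the channel width, SNR and user count are illustrative assumptions of my own:

```python
import math

# Shannon-Hartley: C = B * log2(1 + S/N). The channel width, SNR and user
# count below are my own illustrative assumptions, not Artemis' figures.

def shannon_capacity(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

cap = shannon_capacity(20e6, 20)  # 20 MHz channel at 20 dB SNR: ~133 Mbit/s

print(cap / 100 / 1e6)  # 100 users sharing one big cell: ~1.3 Mbit/s each
print(cap * 0.8 / 1e6)  # 80% of a personal cell to yourself: ~106 Mbit/s
```

Even if pCell only reaches 80% of the theoretical ceiling per personal cell, that is still roughly two orders of magnitude more than sharing the same channel with a hundred neighbors.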

But let’s have a closer look at the challenges that come with using interference instead of avoiding it:

  • pCell does not require cell towers. However, that is only half the truth. It requires a very dense network of base stations. These stations will be smaller and certainly cheaper than the average cell tower. Nevertheless, they must be installed, powered, maintained and connected to a powerful backbone network. Remote areas are not likely to be equipped with such technology and will still rely on cell towers. In dense areas, like metropolitan cities, where we already see cell towers every few blocks, pCell will make the network smarter and faster. The bandwidth gap between highly populated and remote areas will widen even more.

  • The base stations will have to cooperate to create the best signal that suits everyone in the area. This means a tremendous amount of data has to be transferred back and forth, either between the base stations or between the base stations and a central point. Regardless of the architectural approach, we are probably talking about tens to hundreds of GB/s in highly utilized areas. There has to be a very strong and low-latency network behind the network. To me, this is the most interesting part of the game.

  • Needless to say, all the bits and bytes flying through the fibers (fiber is the only way I can imagine to handle a pCell backbone network) need to be processed. Hopefully a lot of the processing can be done in the base stations themselves using application-specific integrated circuits (ASICs). If the data needs to be processed somewhere else, this will add extra latency to the whole network.

  • The network needs to know the position of the receiving antenna with a precision of 1 centimeter. This requires very precise location tracking and will only work if a fallback mode for bootstrapping joining devices is provided. The resulting centimeter-precise information on how and where a receiving device is being used is probably gold for researchers and advertisers.
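The backhaul concern in the second point above can be sanity-checked with a quick back-of-envelope calculation. Every number here is a rough assumption of mine, not an Artemis figure; the point is only the order of magnitude:

```python
# Back-of-envelope backhaul estimate. All figures below are my own rough
# assumptions, not Artemis' numbers; only the order of magnitude matters.

cells = 500          # active personal cells in one dense neighborhood
per_cell_mbit = 100  # assumed payload per cell, Mbit/s
overhead = 4         # raw waveform/I-Q data vs. user payload (a guess)

backhaul_gbit = cells * per_cell_mbit * overhead / 1000  # in Gbit/s
backhaul_gbyte = backhaul_gbit / 8                       # in GB/s

print(backhaul_gbit, "Gbit/s =", backhaul_gbyte, "GB/s")
```

Even with these modest assumptions, a single dense neighborhood lands in the tens of GB/s range, which is firmly fiber territory.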

Besides the challenges, there is one great benefit I want to point out: with pCell it is possible to run hundreds of small, individual cells, each of them using the standard that suits it best. I see it as an evolution similar to software-defined networking (SDN) and network functions virtualization (NFV). Why deprecate old standards and invest in expensive hardware replacements, when we can just virtualize whatever network we want? Old and rusty industrial control devices using GSM as an emergency link? Not a problem for a fast-evolving mobile network anymore!

If pCell can be proven in the wild and scales as intended, it will be a revolution. Maybe not that much of a revolution in the history of technology, but an even more painful revolution for mobile data plans than the European Commission would be able to enforce :)