Symbiotics

Naturally parallel computing

Intesym uses the term Symbiotics to refer to its proprietary methods of parallel computation and its implementation of highly parallel microprocessors. The defining characteristic of a Symbiotic processor is that parallel execution is the very basis of its design. Typical benefits of this are:

  • New parallel processes can be created in a single clock cycle.
  • Useful parallelism can be fine-grained enough to be measured in nanoseconds.
  • Concurrency levels can run into the hundreds of thousands.
  • Multiprocessing is a natural arrangement.
  • The middleware, abstraction layers, scheduling, load-balancing, and other software inefficiencies associated with conventional parallel computing are eliminated (see the sketch after this list).
  • There is no concept of interrupts; I/O handling is concurrent with all other processes.
  • Real-time response to external I/O signals is a natural consequence.
  • Energy efficiency is natural: electrical power consumption is a function of computational load.
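
For contrast, the sketch below (a minimal Go illustration, not Symbiotics code or any Intesym API) shows how a comparably fine-grained, highly concurrent workload is typically expressed on a conventional processor: every unit of work passes through a software runtime whose thread creation, scheduling, and synchronisation are precisely the overheads listed above.

    // Minimal Go sketch, illustrative only; this is not Symbiotics code.
    // Each task is tiny, but creating and coordinating it goes through a
    // software runtime and scheduler, the kind of overhead a Symbiotic
    // processor is said to avoid by creating processes directly in hardware.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        const tasks = 100000 // concurrency on the scale described above

        results := make([]int, tasks)
        var wg sync.WaitGroup

        for i := 0; i < tasks; i++ {
            wg.Add(1)
            go func(n int) { // goroutine creation and scheduling happen in software
                defer wg.Done()
                results[n] = n * n // a trivially small unit of work
            }(i)
        }

        wg.Wait() // software synchronisation barrier
        fmt.Println("completed", tasks, "tasks; first results:", results[:5])
    }

On a Symbiotic processor, as described above, the equivalent parallelism would be created by the hardware itself within single clock cycles, with no runtime scheduler or synchronisation software involved.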

Expandability

A further characteristic of a Symbiotic processor, which relates to its name, is that many of them can be connected and will automatically work together without software control or coordination. As a result, a system’s computational power can be expanded almost indefinitely simply by adding processing elements.

Simplicity

Their simplicity and ease of design make Symbiotic CPUs/MPUs excellent alternatives to the conventional architectures popular in industry. Symbiotics shines where high levels of parallelism and concurrency are desirable, and it is well suited to all scales, from embedded FPGA implementations to HPC ASICs.

Generality

Symbiotics imposes no specialisation on the function of the processors and can be used for completely general-purpose architectures, e.g. Quickfire, to replace any conventional CPU. Of course, specialist arrangements are possible and easy to create, such as the Cortica Idemetric Processor. Symbiotics is therefore an ideal basis for any computational need, ranging from embedded microcontrollers through mobile and desktop units to mainframes and supercomputers, and covering applications as diverse as communications, control systems, databases, artificial intelligence, simulations, and climate prediction, as well as such everyday tasks as web browsing and word processing.

In addition, this architectural generality is not limited to CPU functions: the techniques can also be applied, with equal ease, to create advanced graphics processors (GPUs), audio processors, digital signal processors (DSPs), and so on.

Processing efficiency

The Symbiotic execution model is far superior to the conventional “von Neumann” models (used by almost all of today’s commercial processors) because it naturally follows ideal execution patterns regardless of loading, the mixture of tasks, the mixture of parallelism granularities, and even aspects such as bandwidths and latencies around the system. The programmer and compiler do not need to worry about run-time conditions because the system inherently “relaxes” into the optimal execution pattern without guidance.

Conventional processors incorporate many ‘advanced’ features such as branch prediction, speculative execution, superscalar issue, and out-of-order execution, but these are difficult to design, expensive to implement, and complex to tune, and they have proved notoriously prone to severe security attacks, as the speculative-execution side-channel exploits have shown.

In contrast, Symbiotic processors do not need such features because Symbiotics offers intrinsically more efficient, and safer, ways of achieving even higher levels of performance. Eliminating this hardware makes a Symbiotic system simpler to design, easier to use, and cheaper to make, and it frees resources to be applied to more important design aspects.

Energy efficiency

A Symbiotic processor consumes electrical power in proportion to its computational load. If there is no work to be done, it does nothing (not even idle), and so it consumes virtually no power. This is a consequence of the nature of Symbiotics and requires no monitoring circuitry, clock variation, or voltage reduction, thereby doing away with the complexities and inaccuracies of traditional power-saving schemes.
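
For context, the dynamic power drawn by a conventional CMOS processor is commonly approximated by the standard expression below; traditional power-saving schemes work by explicitly managing the supply voltage V and the clock frequency f, whereas the Symbiotic approach described here relies on the switching activity α simply falling to zero when there is no work to do.

    P_dynamic ≈ α · C · V² · f

Here α is the switching activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency.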

A case for a new architecture

The right tool for the right job

If a product performs its function for an application perfectly well then, by definition, it cannot be improved upon, and so new technology is not required for that application. As such, the concept of the right tool for the right job applies in computing as much as it does in any other field. Some tasks, such as word processing and web browsing, do not need computers anywhere near as powerful as even the cheapest PCs of ten years ago, because they are trivial tasks that have not greatly increased in magnitude or complexity.

However, some tasks become increasingly difficult over time, either because the problem itself grows (e.g. air traffic control) or because the problem is so huge that attempted solutions can absorb as much computing power as can be acquired (e.g. weather forecasting). In addition, there are classes of problems which need more advanced technology not to solve a problem faster but to solve it with less effort, complexity, or cost. Most electronic systems can benefit from simplification, most commercial systems can benefit from lower cost, and most users can benefit from less effort.

In the context of the problems requiring new technology, the question needs to be asked whether the evolutionary path followed for the past two decades has provided the right tools for the job or whether there should be a revolutionary solution.

Revolution vs. evolution

Up until the end of the 1990s, computing speeds were increasing rapidly, roughly in line with the popular reading of Moore’s Law (a doubling of performance about every 18 months). This was evolutionary progress, but it stumbled in the last years of the 20th century. When the rate of growth of computing needs exceeds the rate of growth of computing power, technological, scientific, and, ultimately, social developments suffer.
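
As a rough worked figure, a doubling every 18 months compounds to about a hundredfold increase per decade:

    performance(t) ≈ performance(0) × 2^(t / 18 months), so over 120 months the factor is 2^(120/18) ≈ 2^6.7 ≈ 100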

There is no point evolving if solutions to today’s problems are decades away. By the time the solutions arrive, the problems will have moved on and even greater difficulties will lie in store. Evolutionary development has shown over the past decade that it cannot match demand, so a revolution has been overdue for many years.

However, it is not sufficient to have a revolution if what follows it is merely evolution at the previous rate. The traditional doubling of power every 18 months is not a fundamental law but a consequence of the way the technology is applied. There is no fundamental reason why computational systems cannot improve faster; it is simply that computers, as they are currently designed, cannot improve faster. Symbiotics addresses this situation and allows much faster growth in the future.

New jobs for new tools

Conventional modern computers are based on the past, not the future. Processors are designed to suit how pre-existing software operates rather than how new software could operate. Software is designed to gloss over the complexities of multitasking, multithreading, multiprocessing, and networking, and so future computers fail to address these issues, even with the fashionable “multi-core” approach. It is a vicious circle that the industry is desperate to escape but afraid to break.

Once the decision has been made to adopt a new approach, many new possibilities open up. Ideas previously considered technologically impossible can become straightforward once a new tool is available. Intesym believes that Symbiotics is such an approach.