Capable of performing over a dozen operations per clock cycle per chip and scalable to hundreds of chips, the Symbiotic Quickfire was developed by Intesym under commission in 2001 as a general-purpose 64-bit triple-core microprocessor for workstations, servers, and mainframes.
Being highly parallel in nature, Quickfire is well suited to multiuser systems running many concurrent background services, and lends itself particularly well to acting as the central server in a “Thin Client” model, where many heavy-duty applications run on that server.
Any application requiring large-scale or very closely coupled parallelism will benefit from Quickfire. Examples include Monte Carlo simulations, analogue/digital circuit simulation, numerical analysis, neural networks, and data mining.
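To illustrate why such workloads parallelize well, consider a Monte Carlo estimate of π: each worker draws random points independently, with no shared state and only a single reduction at the end. The sketch below is a generic Python illustration of this class of workload, not Quickfire-specific code.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(samples: int, seed: int) -> int:
    # Each worker uses its own RNG and needs no communication with the others.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(total_samples: int = 1_000_000, workers: int = 4) -> float:
    # Split the work evenly; the only synchronization is the final sum.
    per_worker = total_samples // workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(count_hits, [per_worker] * workers, range(workers)))
    return 4.0 * hits / (per_worker * workers)

if __name__ == "__main__":
    print(estimate_pi())
```

Because the workers never exchange data mid-run, throughput scales almost linearly with the number of processing elements, which is exactly the property a highly parallel architecture exploits.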
One of the largest persistent problems facing computer designers, software programmers, system administrators, and end-users is the increasing complexity of large systems. Conventional computers incur immense performance overheads when two or more are expected to work together, as in clusters, because they are not designed for cooperation: they are standalone systems with networking or bus-sharing hardware bolted on, complete with extra layers of device drivers and task-management utilities. In contrast, Quickfire processors co-operate at the most basic hardware level, eliminating the extra hardware, drivers, and abstractions associated with multiprocessing, and greatly simplifying the task of developing software.
This simplification in the hardware not only reduces complexity in the software; it can also shorten development times, lower the chance of bugs, and thus cut development costs and bring deployment dates forward.
Quickfire, like all Symbiotic architectures, is inherently parallel in nature. Its programming model is both efficient and easy to use, releasing programmers from the strait-jacket of sequential programming and giving them the freedom to express algorithms and methods in ways not possible with conventional computers.
As an added bonus, this freedom can be achieved with well-known, widespread, industry-standard programming languages and operating systems, removing many barriers to true high-performance computing.
A parallel computing architecture, scalable from embedded systems to supercomputers, that efficiently handles fine-grained concurrency at levels of hundreds of thousands.
Variants include 16- to 64-bit general-purpose systems, transmutable instructions, and arbitrary-precision arithmetic.
A case for a new architecture