Latest project source code (ZIP): quantos.zip
Reading / references that I found useful:
QUANTOS is not fast. No simulation of a quantum computer really is, but QUANTOS especially so: I deliberately did not try to optimize anything, favoring small, succinct, naive implementations over fast, optimized ones. It is single-threaded and simple. I wrote QUANTOS for its educational value. It is small, currently just under 1000 lines, and therefore easily comprehensible. A single competent Lisp programmer should be able to ingest the source and be well along into puzzling over the strangeness of quantum mechanics in short order.
If I, or you, really wanted a fast quantum computer emulator, I wouldn't hesitate to run QuEST, Qiskit, or QVM simulations. For most practical work designing quantum algorithms, they are definitely the better option.
Interestingly, however, QUANTOS does directly benefit from Common Lisp in many respects. For example, when expressing certain quantum algorithms, leveraging CL's macro system may make a lot of sense. A Lisp macro could take an integer argument, call it N, and output the "quantum assembler" necessary to implement an N-bit full-adder. This could allow families of quantum algorithms to be expressed very easily.
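To make the idea concrete, here is a rough sketch in Python rather than Lisp. The gate names, register naming, and the exact gate sequence are illustrative assumptions, not the QUANTOS instruction set; the point is only that a function (or, in Lisp, a macro at expansion time) can turn the single parameter N into a full circuit:

```python
def nbit_adder(n):
    """Expand n into a flat gate list for an n-bit ripple-carry adder.

    Register naming (a = addend, b = addend/sum, c = carries) and the
    exact gate sequence are illustrative, not QUANTOS's actual syntax.
    """
    prog = []
    for i in range(n):
        prog.append(("toffoli", f"a{i}", f"b{i}", f"c{i+1}"))  # carry out
        prog.append(("cnot",    f"a{i}", f"b{i}"))             # partial sum
        prog.append(("toffoli", f"c{i}", f"b{i}", f"c{i+1}"))  # carry propagate
        prog.append(("cnot",    f"c{i}", f"b{i}"))             # final sum bit
    return prog

circuit = nbit_adder(4)  # 4 gates per bit position, 16 gates total
```

A Lisp macro doing the same expansion would emit the "quantum assembler" forms directly into the program at macro-expansion time, so whole families of adders come from one definition.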
That said, what I really wanted was some simple source code to study, to help me better understand some of the deeper fundamentals of quantum computing, along with the associated mathematics. Not finding much available (or at least, what was available was incomplete, poorly designed, or in a language that I hate), I decided to start coding.
The source is heavily commented, explaining both the structure of the code and some of the mathematics behind it. It should be useful both as a gate-level quantum simulator, à la Qiskit or QuEST, and as a way to go a level deeper and define custom quantum gates.
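As a taste of what "a level deeper" means: a custom gate is ultimately just a unitary matrix applied to a state vector. A minimal single-qubit sketch in Python (plain lists, no libraries; the function names are mine, not QUANTOS's):

```python
import math

s = 1 / math.sqrt(2)

# Any 2x2 unitary matrix defines a custom single-qubit gate.
HADAMARD = [[s,  s],
            [s, -s]]

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix into a 1-qubit state [amp0, amp1]."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

ket0 = [1.0, 0.0]                  # |0>
plus = apply_gate(HADAMARD, ket0)  # equal superposition of |0> and |1>
back = apply_gate(HADAMARD, plus)  # H is self-inverse: back to |0>
```

Defining a new gate is nothing more than writing down another unitary matrix; everything else is the same matrix-vector multiply (extended, for n qubits, to a 2^n-element state vector).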
The QASM-style language is not 100% complete yet, so care must be taken to address qubits and ranges individually (i.e. there are no named bit-sets yet). Some other miscellaneous but trivial functionality is missing as well, such as including other files. This will all be rectified in 0.6.
After that, it's a straight run to IBM QASM compatibility, and importation/translation of the standard QASM primitives. This is somewhat unnecessary, as QUANTOS already has most of the primitives as built-in operations, but working through the IBM libraries of QASM code will provide a good test bench to make sure things are functioning properly.
QUANTOS work has stalled for the medium term. I've preserved the roadmap below to show the project's goals, but I am focusing on the *Lisp implementation. Once that is complete, I intend to re-implement the QUANTOS mathematical primitives in *Lisp to take advantage of the multi-core, distributed, and GPU-accelerated primitives I'll have available. While speed isn't a major goal, when the simple primitives are implemented against an accelerated *Lisp base, things should become quite interesting.
Version | Milestone | Status |
0.1 | Basic mathematics and matrix functions | Done |
0.2 | Gate definitions and generator functions for 1-bit gates | Done |
0.3 | | Done |
0.4 | | Done |
0.5 | | Done |
0.6 | | In progress |
0.7 | Full lexer for IBM-spec QASM to my S-expression semantics | Future |
0.8 | Import IBM QASM examples & libraries | Future |
0.9 | Iterate 0.7 to 0.8 until all QASM test examples work | Future |
1.0 | Get it into the quicklisp repo | Future |
Way back in the 1990s, I was fascinated with the then state-of-the-art CM-5 supercomputer, which held multiple placements on the Top 500 list, and in particular with the Lisp dialect developed for it, known as *Lisp.
Thinking Machines Corporation developed a single-threaded simulator for it, meant to be run within a conforming Lisp implementation on a workstation. This provided a programming environment similar to that of the CM-5. Of course, on the machines of the day it was quite slow, and one could not simulate a CM with very many processors. But it was sufficient for debugging your algorithms on small test cases before using up time on the very expensive real deal.
I was familiar with the simulator, having used it long ago when I was first learning Lisp. I decided to revisit it and dig into the code. Initially, I had three objectives:
I have since taken on a fourth goal: to GPU-accelerate portions of the base code as well.
This latter goal came about by happy accident: I found the Bitsavers software archive and was able to reconstruct the tape images and recover a good deal of original CM software, including source code to the CM graphics and scientific libraries.
Goals #1 and #2 are essentially complete, though there are some circular dependency issues that make an automated build difficult, and there are some strange errors where SBCL is not able to (compile-file) a file but IS able to (load) it. I'm not sure whether resolving the dependencies will fix all of that, or whether something deeper is going on with SBCL itself.
Work on the distributed framework for running on a number of nodes is partially complete, but nonetheless good enough to write distributed applications with. Declaring a p-var (parallel variable; essentially, each processor of a CM got a chunk of an array to work on individually) allows it to be split up over multiple machines, and standard *Lisp primitives like (sin!!) work as expected. There are still some tricky issues to sort out with some of the other functions to ensure compatibility, but things work well enough that the cellular automaton from "Getting Started with *Lisp" runs cluster-wide and distributed.
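The distribution scheme is easy to sketch. Assuming simple contiguous chunking (the real framework's layout may well differ), a p-var splits across nodes, each node maps the operation over its own chunk, and the results join back up; logically, that is all (sin!!) has to do. A Python sketch with hypothetical names:

```python
import math

def split_pvar(values, nnodes):
    """Split a p-var into contiguous per-node chunks (layout is an assumption)."""
    size = -(-len(values) // nnodes)  # ceiling division
    return [values[i * size:(i + 1) * size] for i in range(nnodes)]

def sin_pp(chunks):
    """Each node applies sin elementwise to its own chunk, in parallel."""
    return [[math.sin(x) for x in chunk] for chunk in chunks]

def join_pvar(chunks):
    """Gather the per-node chunks back into one sequence."""
    return [x for chunk in chunks for x in chunk]

data = [i * 0.1 for i in range(10)]
result = join_pvar(sin_pp(split_pvar(data, 4)))
```

The appeal of the model is that every elementwise primitive follows this same split/map/join shape, so once the chunking and transport work, the whole *Lisp vocabulary of !! operations comes along largely for free.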
Another housekeeping issue is that the code is currently very much tied to my cluster hardware. I've begun the process of abstracting it out and adding support for other setups and arrangements, but it will take time to get something together for a general release. There just aren't enough hours in the week.
Maybe someday I'll get the full PRISM bench re-implemented for Common Lisp on a distributed cluster. Then things will change.
If you build it, they will come.
My cluster consists of 32 Jetson Nano 2GB models, each with a 64GB flash disk, connected over gigabit Ethernet. This is the primary development platform I have been using for the *Lisp simulator, and I have recently purchased more nodes: 16 ODroid nodes and 2 SOPINE Clusterboards (14 nodes total). This new hardware is deliberately a bit different from the Jetsons, so that I can rewrite some of the clustering software into something that should work well across a variety of different computers.
Here are some pictures of the cluster Nanos themselves as I was assembling them in 4-board groups:
The quad-boards are each hooked up to an individual power-supply, and are arranged in 8-blocks within the half-rack chassis:
Finally, I also added in a single Raspberry Pi, with a hat to control an array of LED panels so that I can write up the necessary modifications to the *Lisp source to properly work with my lights/hardware.
This Lambda has some simple logic to determine the flow of the site. If a URI begins with /static/, this signals that the URI path should be pulled down from an S3 bucket, pre-processed by Python, and finally delivered to the browser as HTML. The pre-processing handles rewriting local non-HTML links so that they are pulled from an S3 bucket, and splicing HTML into the main document to create this webpage. While presented to you, the viewer, as a single monolithic webpage, the different projects described here are separate HTML files locally.
If the URI begins with /site/, this signals that the object should be pulled down from S3, but that it is a Python script to be invoked within the Lambda context. The output of that script is then delivered to the user. While I am not using this mechanism on the site just yet, I do have some plans for it.
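The dispatch described in the last two paragraphs boils down to a prefix test on the URI. A sketch of the routing, with stand-in stubs for the S3 fetch and the pre-processing (the real Lambda would use boto3 and my actual rewrite rules, neither of which is shown here):

```python
def fetch_from_s3(uri):
    # Stand-in for an S3 GetObject call; the real handler uses boto3.
    return f"<html>raw body for {uri}</html>"

def preprocess(html):
    # Stand-in for the Python pre-processing step (link rewriting,
    # splicing the per-project HTML files into the main document).
    return html.replace("raw body", "processed body")

def route(uri):
    """Dispatch a request by URI prefix, as the site's Lambda does."""
    if uri.startswith("/static/"):
        # The S3 object is HTML: pre-process it, then return it.
        return 200, preprocess(fetch_from_s3(uri))
    if uri.startswith("/site/"):
        # The S3 object is a Python script; here we just return its
        # text rather than executing it, as the real Lambda would.
        return 200, fetch_from_s3(uri)
    return 404, "not found"

status, body = route("/static/index.html")
```

The nice property of this shape is that adding a new content class is one more prefix branch, with no change to the deployed infrastructure.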
Image and other file content is handled with direct S3 access: the S3 objects at play have been made publicly readable, so the browser can pull them in directly.
With just these simple mechanisms, I am able to easily create website content that is largely fail-safe. The infrastructure itself is also very inexpensive, costing a fraction of what running a dedicated Apache server for this purpose would.
The downside is that, without a traditional Apache/web-server stack, common web apps like WordPress and so on cannot run. So, with this setup I am responsible for coding up everything, mostly from the ground up.
But this is changing. Community Lambda layers for PHP, ImageMagick, and so on are quickly making it more and more likely that we'll soon be able to run seamless web apps on a serverless model, without needing much custom code to tie things together.