
Understanding the Bull GAMMA 3 first generation computer through emulation


Formal Metadata

Title
Understanding the Bull GAMMA 3 first generation computer through emulation
Number of Parts
542
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
First-generation computers emerged during WW2 and developed in the 1950s, and were based on technology quite different from the present: vacuum tubes and germanium diodes for processing, delay lines for memory. Running those ancestors is extremely difficult given the few exemplars left, their delicate technology and the lost expertise. However, at a logical level, their architecture is not so different from our current computers, as these ancestors quickly adhered to the then-emerging von Neumann architecture. This allows us to understand their behaviour on programs by emulating their instruction set and memory structure. This talk presents this process as applied by the NAM-IP computer museum to revive the memory of the GAMMA 3, the first French computer, built and sold by Bull between 1952 and 1962. After describing the machine structure and some existing emulators (assembly/JavaScript from ACONIT), we show how to progressively build a Java implementation by following the actual machine evolution, from a core machine centred on an ALU operating on 7 decimal words and a 64-instruction panel to an enriched instruction set supporting a binary computation mode and external memories, including a magnetic drum of about 100 KB. We then demonstrate the result on a few business programs recovered from that era. The talk also recalls key concepts, the electromechanical context, and how the transition to computers proceeded.
Transcript: English (auto-generated)
Our museum is located in Namur, so it's not far from here, so if you have some time to come, you're welcome. So we have different missions, of course. One of them is to preserve old machines,
to show them to the public, and also to study those machines, to keep understanding them. So I might talk more precisely about that. Actually, why this machine? We have a big collection, part of our museum. It's actually a big mechanographic collection. You can see it here.
So we have a whole bunch of machines, electromechanical machines, that are still being maintained. Unfortunately, we don't have the Bull Gamma 3. It's very rare, but it was connected with those machines.
So we have a lot of documentation about those machines, and we were interested to study that machine more specifically. So I will go through the historical context, make you discover the machine, and then go into it to try to emulate it,
looking at some existing emulators, and then detailing our own emulator and what we learn with it. So let's go back in time. We are here now in 2023.
So if we go back 70 years ago, just after World War II, the first generation of computers was developed. So at that time, the technology was very different from today, because there were no integrated circuits.
There were no CPUs or microprocessors, those were developed in the 70s. There were no TTL circuits. There were no transistors. There were no magnetic cores. Actually, when you really wanted to build a computer, you
had technology like vacuum tubes, delay lines to try to store some memory, and drums. So it was really a very different technology, and of course you can imagine the memory was very small. Another point is that, at that time, there was of course automation before the computer.
So most of the automation was done through electromechanical machines. So a tabulating machine, you know, it was developed in the end of the
19th century with the Hollerith tabulating machine, and then it became the IBM company. And you can see here that there was some kind of transition between that era and those machines, those computers that were starting to be developed.
And actually, the interesting point is that one that I will show you, actually, at the beginning it was not really a computer. It was still some kind of auxiliary calculator for a tabulating machine, that one that you can see in our museum.
And after, actually, it began to improve and the dependency between the machines was reversed. So the Gamma 3 became the computer and the tabulating machine became the peripheral.
You can see other machines after, of course: they also developed the Gamma 60 and Gamma 30 machines, but those are second generation, so I will not focus on that, maybe next time. And so how did we study the machine? Of course, we have documentation at the museum.
There are also a number of surviving examples of that machine: one in Angers, where it was built; one in Grenoble, acquired and preserved there; and one in Frankfurt.
So, of course, we don't have one, but we have that documentation, and also a lot of documentation provided by ACONIT, which is another museum located in Grenoble. And there are a few emulators, I will come back to that later.
Have a look at the hardware. So, as I told you, it's a first generation computer, based on vacuum tubes and delay lines. Actually, the code was stored in a connection panel, you can see it on the top there. So, in order to program it, you had to plug each instruction in, to say: the first instruction
has four characters, the first character is that decimal code, the second one is that code, and so on. So, it's really spaghetti coding, and actually that spaghetti coding was
also used in tabulating machines, so it was the way to code at that time. And that's also the reason why we cannot really call it a computer in that form, because it does not follow the von Neumann architecture: in that architecture you have to have the code inside the main memory.
Although somehow that panel was memory mapped, so you could consider it like some kind of read-only memory. What about the memory? The memory itself, actually, it was only seven registers.
And in order to keep the information, which was the equivalent of six bytes, so 12 characters of four bits, it was just circulating in a line with a regeneration system, so it's an LC circuit.
And for just one word, so for six bytes, you can see the device here, it's more than eight kilograms. So you can imagine the size of the whole machine, it was really very big.
About the computation, it was also based on diodes, I will not go into all the details. It was mostly addition and subtraction; as we will see, multiplication and division were implemented through iterative addition and subtraction.
And what about the frequency? The frequency was 2.5 kilohertz. Why that? Actually, the electronics could go faster, but it was synchronized with the mechanical machine, with the punch cards, so it was limited by that part.
And you can see also there is a nice drawer, really easy to open, of course, for maintenance: when a vacuum tube had problems, you had to replace it, and it was designed for that. So, is it a computer or a calculator? In French, we have different names, but
as I told you, we cannot really consider it a computer in its first form, because it was not following the von Neumann architecture, and it was really designed as an auxiliary machine for the tabulator.
So, as you can see there, a quote from a guy who designed the machine in 1953. So, it's really an extension, and the good point is that the computation was so fast
that there was no delay by the calculation, so it was really transparent for the tabulating. And actually, at that time, the programs inside the machine were more like auxiliary computations that were augmenting the capability of the tabulating machine.
And there was evolution, that's the interesting point. That first version was only adding and subtracting integers, so then there was a version that was able to do floating point. And then in 1957, there was the drum extension, that's the interesting point: a magnetic drum of about 100 kilobytes, and it could store the program.
So, from that time, we can say that it's really the first French computer, and it's also the transition between the electromechanical era and the computer era. Also, another interesting point is that those first computers were not
using binary or hexadecimal representation, they were still computing in decimal. So, it's interesting because I found, it's in French but I translated it there, a whole discussion about whether to use decimal, or binary or hexadecimal, for computation.
So, there were some advantages and some disadvantages. You can see the advantages: only two figures, zero and one, it's really powerful,
and for the relays, it's ideal to map. And for the disadvantages: in binary, a number becomes a very long word, and we need to translate back and forth with the decimal. So, the conclusion is quite funny: we will use semi-decimal, which actually is the name for binary coded decimal, and introduce that coding on 4 bits for the binary coded decimal.
So, that was for the first version; after, they came back on that decision, and actually an update for the drum extension was able to support the full binary mode.
So, what do we have as memory? As I told you, we have those registers, actually we have seven main registers. You can see here a bit more, because there were extensions. So, a register is one word of 12 digits, 12 characters, so those 4 bits, so it's actually 6 bytes.
So, the main memory was only 42 bytes, so you see it's very limited. And if you look at the full architecture here, the gamma 3 with all the extensions,
you can see on the top left, the panel, the main registers are on the left. The top one, the M1, actually is the only one where you can read and write, so all the computation will be performed in that one.
And the other one, M2 to M7, will be used as a register to read operands. And the instructions, you can see, the decoding of the instructions, the structure of the instructions is composed of 4 parts, I will detail them after, it's called TO, AD, OD and OF.
And the rest are extensions, so this is more memory, so you can switch those registers with those ones. And the drum extension can also be mapped on those, they are called octads, so you can load
a part of the program from the drum to those parts and then execute them in the computer.
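The memory layout just described, seven main registers of 12 four-bit characters each, can be sketched in Java roughly as follows. This is purely our illustration; class and method names are ours, not Bull's:

```java
// Sketch of the word layout described above (identifiers are ours, not Bull's):
// one word = 12 characters of 4 bits each = 6 bytes.
public class GammaWord {
    public static final int DIGITS = 12;          // characters per word
    private final int[] chars = new int[DIGITS];  // one 4-bit character per slot

    // Positions are 1..12, as in the machine's range addressing.
    public void set(int pos, int value) {
        if (value < 0 || value > 15)
            throw new IllegalArgumentException("not a 4-bit character");
        chars[pos - 1] = value;
    }

    public int get(int pos) {
        return chars[pos - 1];
    }

    // The main memory was 7 such words: 7 * 12 characters * 4 bits = 42 bytes.
    public static int mainMemoryBytes(int words) {
        return words * DIGITS / 2;
    }
}
```

With seven words, `mainMemoryBytes(7)` gives the 42 bytes of main memory mentioned in the talk.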
So about the instruction set, you can see that there are 4 parts, the first is quite natural, it's just the type of operation, so you can have addition, subtraction, I will detail after. The second part is also quite natural, it's just the address, it means which operand we
will use, so for the addition, here we can see it means M4, the register number 4. And what's a bit different and weird is that then we have two other pieces of
information in the instructions that will tell you which range in the register you will manipulate. The reason is that the memory was, well, scarce, and so if you wanted to store two different information in the same register, you
could then address one part of it, and you could really select if it was 2 digits and then 10 digits, and things like that. So you can see here a very simple addition, so I can decode it with you: this means a transfer from one register to the accumulator, so the M1 register.
So it's from M4, so you can see M4, we have two parts, A and B, so A is from 6 to 9,
and B is from 1 to 5. So the first thing is that we will load the part 6 to 9 into the accumulator, then we will ask to perform an addition with what we can find in the same register 4, in part 1 to 5, you can see 1 to 5 here,
and you can see that as an internal flag, it also remembers the part that is used for the shift part that it should use for the addition. And then you can perform the addition and it will have A plus B inside the register, and then you
will put back the result, so it's reversed, instead of B it's OB, to store back the result in M4, and of course, here you have to think, oh, I've done an addition, so maybe there is one carry overflow, so you can see here that we have provisioned one byte more to be able to store the result back.
So you can see all the mental gymnastics you have to do to be able to program with that kind of range in the registers.
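The ranged addition walked through above can be sketched like this. The field names TO, AD, OD, OF follow the talk; everything else, including the assumption that position 1 is the least significant character, is our guess for illustration only:

```java
// Illustration of the 4-part instruction format described above:
// TO = type of operation, AD = operand register, and we read OD/OF as the
// two bounds of the character range (our interpretation of the talk).
public class GammaInstruction {
    final int to, ad, od, of;

    GammaInstruction(int to, int ad, int od, int of) {
        this.to = to; this.ad = ad; this.od = od; this.of = of;
    }

    // Ranged decimal add: add characters od..of of src into the accumulator,
    // in place. Registers are arrays of 12 decimal digits; we assume index 0
    // (position 1) is the least significant character, which may well differ
    // from the real machine.
    static void addRange(int[] acc, int[] src, int od, int of) {
        int carry = 0;
        for (int i = od - 1; i < of; i++) {   // positions are 1-based
            int sum = acc[i] + src[i] + carry;
            acc[i] = sum % 10;
            carry = sum / 10;
        }
        if (of < 12) acc[of] = carry;  // the extra character provisioned for the carry
    }
}
```

For example, with the accumulator holding 7 and the source holding 25 in its first two characters, `addRange(acc, src, 1, 2)` leaves 32 in the accumulator, with the carry landing in the extra third character.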
So it means that when you are coding, you have to use that kind of sheet. You can see, of course, the mnemonic, you see here the translation, where you have to think about those ranges: you have to allocate your ranges and reason about them on this sheet as well.
So you can see here the problem is computing that formula, and then you will just perform the different calculation, multiplication, shift to have the right power, and then divide by the square root of 3.
OK, quickly, this is the full instruction set. As you can see, it's not very regular. Well, one natural thing is that no-operation is 0, it was already 0 back then.
You have operations for different kinds of jumps, there was an inner flag to remember how to jump; you have different memory transfers, I will not go into details, of course to set memory to 0 or to load a value, to make transfers between different kinds
of registers; there is a logical AND, I didn't find any logical OR, I don't know if there was one; different comparisons; and then of course, the most important ones, from A to F, the addition and the arithmetic operations.
And you can see there are two flavors for multiplication and division, because there was one called reduced multiplication and reduced division, which was faster but would not operate on double registers, because the result of a multiplication can of course take twice as much space.
Ok, so this is the code card, so it summarizes the whole instruction set, and it reflects the complexity of its organization,
you can point to just three things. First, it's called ordinateur, so in French the name was ordinateur, but the word ordinateur was only coined one year later,
for an IBM machine, so it didn't exist yet, so you have to think about all that. You can see here the different arithmetic operations, A to F, so 12, 13, 14, 15, and you can see the codes are not
always in order: the 7 is presented higher, because it's just the shift and AND operations, and the 2 is not represented because it was an extension for the drum.
OK, let me go quickly. About existing emulators: this one was written in 1995 by Vincent Joguin, in x86 assembly code,
and it still runs, well, thanks to emulation, because you need one of those old boxes to run it. We don't have the source code, but you can see there it's emulating everything, so it's quite good, complete, and you can see there it's loading some information,
so it's just loading 09427 in the M3 register, and then, well, there is a drum emulated, and there are a number of programs on the drum you can try.
A more recent one is available online. This one is very interesting because it's very well documented, and you can even play with the panel, there is a full console where you can step in. And actually it was one of the sources of inspiration for our work, because that one was in JavaScript,
and ours is in Java, so we kind of transposed it, and first studied that code. There is also an extension, a 3D visualization, which is funny because you can explore inside the machine,
you can see here the connections, and there are big cables to connect the machine with the tabulator. About the emulation structure: of course, what we have modeled is all the components, so you have the machine, you have the different kinds of memories, the panel memories, the registers,
the different series groups, and then a special one, which is the panel, which is actually, as you can see, memory mapped to one of the series; and of course you also have the connected machines and the drum. Then of course the whole instruction set: you can see there the modeling, the way the instructions are structured,
depending on the kind, if it's for drum transfers; all the arithmetic operations have some common parts, so we have a hierarchy there. And of course there is some execution management and tests,
and you can see the code there on GitHub. And what's interesting from also the emulation point of view, of course, all operations will have to specify the different information, so for the addition, this is an inner operation just to show you how it's implemented,
so of course you have to specify the range where you are performing the addition, and this is a quite standard implementation where you just loop over the different digits
and propagate the carry. What is interesting is just that you have the base as a parameter, so that code works both for the binary and for the decimal implementation, actually
the variants of the machine, so this is trying to mimic the whole operation. Another one, very simple, is just to use the Java operations: for example for the subtraction,
we just translate everything to decimal, perform the subtraction, and then store the result. There is only one thing we must be careful about: we have to use long in Java, because those 12 digits are more than 32 bits.
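The two implementation styles the speaker mentions, a digit-by-digit carry loop parameterized by the base, and the shortcut of converting the word to a Java long, might look roughly like this (our sketch, not the museum's actual code):

```java
// Sketch of the two implementation styles mentioned above (our code).
public class GammaAlu {
    // Digit-by-digit addition with carry propagation; base is 10 for the
    // decimal machine and 2 for the later full-binary mode.
    static int[] add(int[] a, int[] b, int base) {
        int[] out = new int[a.length];
        int carry = 0;
        for (int i = 0; i < a.length; i++) {
            int sum = a[i] + b[i] + carry;
            out[i] = sum % base;
            carry = sum / base;
        }
        return out;  // the final carry (overflow) is dropped here for brevity
    }

    // The "easy" style: convert the digits to a Java number, operate, convert
    // back. 12 decimal digits do not fit in 32 bits, hence long, as the
    // speaker notes.
    static long toLong(int[] digits) {
        long v = 0;
        for (int i = digits.length - 1; i >= 0; i--) v = v * 10 + digits[i];
        return v;
    }
}
```

The same `add` loop serves both machine variants, only the `base` argument changes, which is the point the speaker makes about the emulator covering the decimal and binary modes with one routine.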
We skip the division. So, the current implementation: we have our prototype in Java, we are just using Eclipse as an environment and running the tests.
So this is just a test, we have a small interface which is not yet finished, and you can see here a quick program that just computes the Fibonacci sequence. You can see the result here, I will not go into detail, but you can see there is actually a loop,
so there is a jump for 10 iterations, and then you have the different numbers; after a few iterations you have 13, which is being computed, so it's working. And now I will finish: so what did we learn?
So it was quite funny and strange to look at that machine. It's not so complex to code, but there are many, many implementation details, as you saw with those range manipulations, and we still have a lot to explore,
for example all the floating points, improving the user interface, and of course we are at the start, so we would like to study what was used as code at that time. So in summary it was very rewarding, from the technical but also from the historical
and cultural point of view. Thank you, and if you have questions you are welcome, there are some references there about all the guys who have worked on that machine.
Does the simulator simulate the core memory and the reading that was required to rewrite it again once you read it? So the question is about the core memory, simulating the reading of the memory,
so it's a good question, because I don't know whether to call this a simulator or an emulator. What we are emulating is kind of an abstraction of the machine,
I would say. So one limitation is that we don't really know the physics of the reading, so we are assuming that we can read reliably the information and that we don't have any timing issues
with the things that you are mentioning. But of course we don't have a working machine to compare with, so we can only compare with expected results or with what the older emulator is delivering.
Actually the older emulator had a problem, there was a mistake discovered, so it was corrected by the guy who was still maintaining it somehow. So the point would be really to be able to study the electronic circuitry if we would like to go to that level,
but we don't have one, sorry. There's a question from the chat room: is there a compiler for the Gamma 3?
So, you see, assembly language was invented two years before, I think, by Booth, and the assembly was done manually at that time. So at that time the answer is no, there were no compilers. But today, actually, the people from ACONIT have developed a compiler from a language that looks like Java, I think,
so you can compile from that pseudocode language into the gamma-3.
I didn't try it myself. So, the question is: how was the program loaded?
The program that I showed was coded on, and executed from, the panel, so the panel is just a way to specify the content of the memory, it's the same, just with wires. But the emulator supports the drum.
The original machine, could it load instructions? Yeah, it could load instructions: the drum could contain instructions, or it could contain data.
So the question is about the cycle count of the different instructions. We have timing about the addition, subtraction, and different kinds of multiplication, so that's available, and that's a good point because the emulator is not taking that into account,
so it would probably be a good point to try to reproduce that behavior. Thank you very much.
Thank you.