A free toolchain for 0.01 € - computers
Formal Metadata
Number of Parts: 490
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/47439 (DOI)
Transcript: English(auto-generated)
00:05
One-cent computers. This started about a year ago, when there was an EEVblog entry about three-cent computers, or microcontrollers.
00:22
And at some point, I had a look at the hardware, and it seemed like it would be a nice target architecture for the small device C compiler. And today, we have a free toolchain for these microcontrollers. Not perfect for all of them yet, but we'll see where we are.
00:43
OK, so I'll start by talking a little bit about the background. What are these microcontrollers that are relatively cheap? I'll talk a little bit about free hardware to support them, basically the device for programming them.
01:01
And then about the tool support by the C compiler. OK, those microcontrollers are made by a Taiwanese company called Padauk. There's also a reseller that rebrands them
01:22
under the name of Puolop. Since Puolop sometimes makes semi-custom versions, there are parts available from Puolop that one can't get directly from Padauk; I think one microcontroller with three quarters of a kiloword of program memory.
01:42
They're relatively cheap, so the very cheapest is indeed available for one cent if bought in quantities of 10. However, those very cheap ones, their program memory is PROM, so I prefer to have them reusable.
02:02
So personally, I use expensive versions which also have bigger memory. The chips on here cost about $0.04 to $0.07. But the toolchain can also target the very cheapest ones. They are quite low-power devices. The architecture is accumulator-based,
02:22
meaning that, quite unlike RISC, you don't have a lot of registers; you have one register, and you typically have accumulator-memory or memory-accumulator instructions. So it's not a load-store architecture; it uses memory operands directly. The architecture is relatively nice if you look at such small devices, yeah?
02:43
I mean, for a long time there was the Microchip PIC, where the smallest devices have the problem of a hardware stack, so you're limited in your call depth. And then there are plenty of Chinese-made 4-bit microcontrollers that are very hard to target with something like a C compiler.
03:03
So considering that this thing is cheap and low-power and small and everything, it has a relatively nice architecture. It's kind of comparable to the Intel 8051 in terms of how nice or not nice it is. The devices have RAM: the smallest ones
03:23
have 60 bytes, but there are versions with up to 256 bytes. One subarchitecture would allow for up to 512 bytes because it has nine address bits, but no such devices are known.
03:40
There are actually different subarchitectures; we'll come to them soon. The smallest has only six address bits for the RAM. For program memory, we have between half a kiloword and four kilowords, depending on the subarchitecture. The width of a word is between 13 and 16 bits.
04:01
There are very few peripherals: typically just timers, comparators, an ADC, something for pulse-width modulation, and a watchdog. However, we have something that, well, is not full multi-core, even though I'll use the term core on the rest of the slides
04:21
because the manufacturer does, but it's kind of hardware threads, a bit like hyper-threading or something like that, which is convenient because you can emulate a peripheral in software on kind of a separate, let's call it core now, and have it respond fast to IO. Okay, so let's get to the subarchitectures.
04:44
I usually call them PDK13 to PDK16 by the width of the words and program memory. They have internal names that are found in files from the non-free IDE from the manufacturer. Again, 13 to 16 bits.
05:01
So for every bit width, there is a subarchitecture with actually existing devices. The address bits for the program memory also go up by one, and likewise the address bits for the data. For IO it's mostly the same, except that PDK16 has fewer address bits.
05:20
As far as I know, PDK16 actually seems to be the oldest of these subarchitectures, with the other ones added later. Only for PDK14 and PDK16 are there actually devices with multiple hardware threads. And if you want the full eight threads, it's only the PDK16 subarchitecture.
05:43
There's still a little bit more variation within these subarchitectures. You have some PDK16 devices that have a multiplication instruction, others don't, and so on. But looking at the big picture, these four subarchitectures are there, and then sometimes there are a few additional instructions or not.
06:05
Okay, how is the hardware multithreading implemented? Well, this is a so-called barrel processor which you may know from the Honeywell H800, from the early 1960s, I think.
06:20
I think the XMOS XCore microcontrollers use something similar, but I don't know of any others apart from the Padauk. Basically, the idea is that you have a set of registers for every hardware thread, and then, cycle by cycle, you rotate through the arithmetic logic unit,
06:44
and every cycle, this arithmetic logic unit processes an instruction from a different core. Yeah, we call these core processing units, whatever. Hardware threads is more accurate; I'll use core because it's short, but the manufacturer documentation uses different terms.
07:04
There's unfortunately a lack of instruction support for this parallelism. Usually with multi-core you'd expect stuff that makes it easy to implement locks, to implement atomics and so on. Unfortunately, that's not in this architecture.
07:20
Yeah, it's very complicated to emulate lockless atomics or something, but we'll get to that later. So synchronization between the multiple hardware threads is a bit problematic, at least from a C perspective. Okay, this is how this thing looks.
07:40
Well, I've boiled it in colophony to take it apart and put it under a microscope. What we see here is the PMC234, one of the big devices. Yeah, it has eight hardware threads, and lots of peripherals by Padauk standards. Still, what we see in the middle, the sea of gates,
08:02
is all the digital logic, of course. The peripherals don't actually make up that much space. If I have a look at the program memory, including the address decoder, that alone takes about the same amount of space. And then, of course, there are the IO parts
08:21
and here these diodes for ESD protection. Yeah, I think the really cheapest ones, the PMS15A or the one-cent things, have a die of, I think, a quarter of a square millimeter
08:42
or something like that. So this here is a little bit bigger. It's like about one millimeter times 1.2 or something like that. So it's easier to pick up with pliers and not to lose when you're decapping a chip. Okay, so before I started the work,
09:01
of course, I had a look at the tools available. The manufacturer provides a thing called Mini-C. It's an IDE with compiler, assembler and everything in one thing, one binary. It's essentially an assembler with a bit of C-like syntactic sugar on top. And there's a programmer/writer and an in-circuit emulator.
09:21
So yeah, it's far from real C. I mean, they don't even have different data types for pointers and integers. There's just one 8-bit data type and one 16-bit data type, and that's it. Your integer type is the same as your pointer type, and so on. Okay, so let's get to the free tools.
09:41
That will be the Small Device C Compiler, which you might already know if you're into the 8-bit stuff, with its assembler, linker and simulator, and a backend for these chips; the Easy PDK programmer for writing them, instead of the programmer from the manufacturer, the big blue box;
10:00
and some very basic development boards. Okay, now let's get into the details of that. So someone reverse-engineered the programming protocol, which was quite weird because it uses lots of different voltages. A little bit unexpected.
10:22
And created this Easy PDK programmer. That's all on GitHub, including the files for 3D-printing it. It works under various OSes. Well, I've only used it under Linux, but I've seen people use it on macOS and, I think, Windows. Yep, and it so far fully supports
10:44
six of those small devices: the PMS15A, which is the thing that's actually available at one cent, and the two flash devices, the PFS154 and PFS173, which are nice because they have flash instead of OTP memory.
11:00
For 12 more devices, there's already read-only support. And as far as I know, the developer is happy to add support for more devices when he gets his hands on the hardware. Padauk has announced lots of interesting microcontrollers that are not yet really available at distributors,
11:22
especially in the realm of two- and four-core devices with flash support. Currently, all multi-core parts are OTP. As for development boards, I designed these very, very basic ones.
11:42
It's just a minimal board. There are five LEDs, of which four are connected to IO; the fifth is just there to tell whether power is coming from USB or elsewhere. You can supply power via the USB connector, which is really only there for power.
12:01
The USB data pins aren't used. There are some diodes to protect the USB side in case something goes wrong, and that's it. There are also demo programs on GitHub for basic stuff like blinking the LEDs, a counter, or hello world via a software-emulated UART on the pins.
12:21
So nothing that fancy, but nice to get started. Now the big thing on the software support side is the Small Device C Compiler. Well, a quick introduction to SDCC for those who don't know it.
12:40
It's a C compiler that tries to be standard-compliant. It has the usual switches to select the mode, be it C90, C99, C11, or the upcoming C2X standard. The latter mode, of course, is still quite incomplete, because the standard itself is still quite incomplete. But we are trying to implement stuff
13:03
as it gets decided by the standards committee. It's usually used as a freestanding implementation, but it can actually be used kind of part of the hosted implementation. For those not familiar with this terminology, the C standard basically defines two subsets of the standard
13:23
and the freestanding one has lower requirements, for example on the library. So when you're doing bare-metal programming, you usually have a freestanding implementation. In a hosted implementation, you would have advanced stuff such as a file system. But there are actually operating systems like Fuzix.
13:42
You might have heard of it: Alan Cox, former Linux kernel developer, is making a Unix system for Z80s, and they use SDCC as a cross compiler. With the Fuzix libraries there, it's then a hosted implementation. The supporting tools are the assembler, linker and simulator that we need.
14:01
It works on many host systems, such as GNU/Linux, Windows, macOS, Hurd, OpenBSD, FreeBSD. It probably works on more, but these are the ones on which I know regular testing is done. It targets plenty of 8-bit architectures, such as the 8051 and similar,
14:21
the Z80 and a lot of Z80-related architectures, and the LR35902, which, for those not familiar with it, is the processor of the Game Boy. The HC08 was once a popular microcontroller series. The STM8 is still a very popular
14:42
8-bit microcontroller architecture. Then there are those three subarchitectures of the Padauk, so everything except for PDK16. And then there's in-development, not yet stable support for the Microchip PIC14 and PIC16, which has been in that state for a long, long time,
15:02
but it's still kind of usable, even for the Microchip PICs. SDCC does have some unusual optimizations that you don't find in other compilers, in particular in register allocation, because for these architectures you're in a bit of a different situation.
15:21
With GCC or LLVM, you're usually targeting something like RISC: you have lots of registers. But here you usually have very few registers, and if you have multiple registers, they're not equal. On RISC it doesn't matter what goes into which register, so a Chaitin-style graph-coloring register allocator is perfect,
15:41
but here you really have to take into account: OK, this instruction is only available on this register, that instruction only on that one, and this instruction might be available on both registers but take one cycle more with this register as operand. That makes it a bit different. Yeah, it's on SourceForge.
16:03
The Padauk backend, as I said, supports three of the subarchitectures. Functions are non-reentrant by default. The problem is that stack access is quite inefficient on these Padauk architectures, so by default we treat everything as if it were static, and you can't have recursion unless you use the keyword
16:22
to say "I want this function reentrant", or use the command-line option to say "I want the whole compilation unit reentrant". Just to illustrate how much that costs: a typical 16-bit addition of two local variables that have been spilled to memory takes 40 cycles if they're on the stack,
16:43
and just six cycles if not. So, reentrancy has a cost. We have special keywords to access the IO address space. Now, just to see how much this architecture would benefit if it had better instructions to access the stack, I created a branch of SDCC
17:03
and compiled some benchmarks for a hypothetical variant, so we can see how much code size could go down. In these cases, only the functions that really need to be reentrant are marked as reentrant; however, if you compile everything as reentrant, as you usually want in a standard-compliant
17:25
implementation, you'd get even bigger savings. So yes, the architecture could be far, far nicer for C if they made just a few small changes, but for now we have to work with what is there. I'll try to contact them sometime
17:41
and ask them if they can't add these things. There are these different variants: SPAT is just an instruction that adds an immediate offset to the stack pointer. IDXSP would mean you have load and store instructions with stack-pointer-relative addressing. The blue bar is the code size if you have both,
18:01
and SPREL would mean you have stack-pointer-relative addressing in all instructions. The code size is, of course, relative to the current architecture. So for some cases, like the Dhrystone benchmark compiled fully reentrant, we could get code size down by 60%
18:20
if we had the best version of stack-pointer-relative addressing. Okay, I've already mentioned that we have an unusual register allocator. It's optimal, and it works in polynomial time. In general, register allocation is an NP-hard problem.
18:44
However, that only happens if you either have the number of registers as part of the input (but for any given architecture it's fixed, so we can assume it to be a constant), or you allow unbounded tree-width of the control-flow graph. And I've proven that to get that,
19:01
you need a huge number of goto labels per function in C. So unless you make excessive use of goto labels, you have bounded tree-width, and the number of registers is fixed, of course. In that case, we can do optimal register allocation in polynomial time. We calculate a tree decomposition of the control-flow graph,
19:22
and then do standard dynamic programming bottom-up. That is still slow, because the runtime is exponential in the number of registers. Not such a big deal for the Padauk, where we essentially have one register, but for the Z80 we have eight or nine, so it's already a lot,
19:40
but still, it's worth it for those small embedded systems, where your program is small anyway, because it needs to fit onto the small device, and you really care about optimization. There's a compilation speed versus quality trade-off: otherwise the partial results become too many at a certain point, so heuristics kick in, and we are no longer provably optimal.
20:03
Okay, so you might wonder about this SDCC compiler: how reliable is it? I mean, it's not a big thing like GCC or LLVM. However, SDCC does regression testing. We have nightly snapshots generated for various architectures, with about 12,000 tests.
20:21
These are mostly from bugs we fixed on our side, and we also took most of the GCC test suite into our tests. We do it for all our stable target architectures and a few host OSes, including one called Windows, though that is actually cross-compiled under Linux and the tests are run under Wine,
20:41
and we do that for four different host architectures, so not every host OS and host architecture combination is covered. But the general thing is: if it's standard-compliant C code, then it works under GCC, and most likely it will also work with SDCC,
21:02
given, of course, the memory size and whatever of the target architecture. Okay, for those who don't like C, there's this experimental LLVM-plus-SDCC toolchain that I haven't done any work on recently. Basically, LLVM with its C backend is used to compile the input, via LLVM, into C code,
21:22
and while the LLVM C backend was thrown out of LLVM a long time ago, there are still people maintaining it on GitHub. The result can be mixed with pure C code compiled with SDCC. The idea was to allow languages other than C and to use LLVM's high-level optimizations, but this is all quite experimental stuff.
21:41
It kind of works. Two years ago I managed to compile Dhrystone using this, run it, and got a little bit of optimization out of it, but it definitely needs more work. And it's actually not the only thing that needs more work; there's still quite a bit to do.
22:00
I mean, SDCC needs more developers; in particular, the people who have been working on the 8051 backend are these days not as active as they used to be. There are, of course, bugs to fix, improvements in standard compliance, and always more optimizations to do. Debug info is not that nice yet. I mean, for some backends we have ELF/DWARF output,
22:23
but it's not perfect yet. It tends to work in a lot of cases, but sometimes the debug info causes problems. And this LLVM-plus-SDCC thing, as I said, is still experimental; work needs to be done to make it usable. IDE integration could be improved. I mean, Code::Blocks has some support for SDCC,
22:43
and there used to be an Eclipse plugin, but that has been unmaintained for a while and doesn't seem to work well for everyone anymore. And, of course, the programmer could get support for more devices. So, questions?
23:12
Yeah. While working with the Padauk chips, have you encountered some hardware bugs and so on? How reliable are they?
23:22
I have not encountered any actual hardware bug. I've encountered bugs in the documentation, though. Ah, yes, the question was whether I've encountered hardware bugs while working with Padauk chips. Of course, I've mostly done the programming stuff, so if there were a bug in a peripheral,
23:43
I wouldn't know. On the other hand, they have such simple and few peripherals that there aren't a lot of places for bugs to hide easily.
24:09
No, there was an attempt a long time ago. Someone wanted to make an AVR backend in SDCC, but they never finished it. A few years ago or so, there was some talk of reviving it,
24:23
because there was talk that GCC was about to drop the AVR backend; I think it's unmaintained. But no, there's no architecture that is targeted both by SDCC and GCC, or by SDCC and LLVM. What you could try for a comparison is IAR,
24:41
because IAR targets both kinds of architectures: ones like ARM, targeted by GCC and LLVM, and architectures targeted by SDCC. Is it possible to write a bootloader for those MCUs? The question is whether it's possible to write a bootloader for those things.
25:04
Well, I mean, what's a bootloader? It's just some program that runs at the very beginning. It needs to fit into the hardware. Do you know?
25:21
No; as far as I know, the chip cannot itself write its flash. No. And is it run from RAM? You can't easily run code from RAM either. I mean, there's not a lot of RAM anyway, 60 bytes up to 256. I'm not sure, there might be some,
25:41
ah, wait, there are upcoming variants announced, already on the website with data sheets, I think, but not yet available at the distributors, that have quite a bit of EEPROM. I think you can write the EEPROM from the chip itself, and you might be able to run a program from there, but I'm not sure.
26:00
I think the documentation isn't complete yet either. But those might be the PGC, ah, the numbering scheme, and they tend to have four cores, and yeah. And I haven't seen the hardware yet. Yeah, in the back first?
26:21
Excuse me? How hard is it to get a programmer for it? Well, you can buy the big, bulky thing from the manufacturer for 100 euros or something, and then have it only supported by their hardware. Or you can look at this thing, which is relatively simple, and the files are at GitHub. But I guess, I mean, there's someone,
26:41
there have been two people now who intend to manufacture something like that. But if you want it quickly, just do it yourself; all the design files are on GitHub. The question was: what packages and pin counts are available for these Padauk things?
27:01
I'm not exactly sure. I know there are eight-pin devices. I'm not sure if there are six-pin devices; I've seen nothing under six pins. And it goes up to 20 or 24, something like that. A lot of packages are mentioned in the data sheets, but so far,
27:22
everything I've seen is these SOIC ones. So that's what's available at typical distributors. But in the data sheets, they even mention DIP variants, and then BGA, and whatever. Maybe, I mean, I guess a lot of people
27:40
who buy microcontrollers for one cent buy billions of them, and can then get their custom packaging anyway. I mean, Padauk apparently sometimes even makes custom instructions for a customer. Do you see any usage of the Padauk in the maker scene?
28:01
If there's use in the maker scene? Well, they're cheap and low-power. I mean, this is cheaper than a 74LS-whatever chip, so I guess there are a lot of uses, but I wouldn't pinpoint something. I mean, it's still a bit too big for smart dust, and then you'd need RFID communication
28:22
or something like that, which I haven't tried yet. I mean, the chips themselves don't have support for that, but it could probably be done with an external part. Or, of course, the classic low-power thing: combine this with an ESP8266 that sleeps all the time while this one is active all the time, and when something interesting is to be reported, wake up the other thing that takes power.
28:45
And you're talking about low power; how low-power can it get? Well, to be honest, I don't remember by heart, so you'd have to look up the data sheet, but when I first went into this, I noticed that they're actually quite low.
29:04
I think they were lower than others. I'm not sure, but as far as I remember, in some cases you got lower than, let's say, an STM8L, so the low-power end of the STM8 series. But I'm not sure about that aspect. I'd have to look at the data sheets again.
29:21
I just remember from a year ago, when I looked at all the hardware before I really dived into doing stuff. Something close to this question: would it be possible to drive them without a battery, like, I don't know, EM or whatever, or sunlight? Well, sunlight, yes, you could use a solar cell
29:42
and a capacitor, stuff like that, that would work. Okay. Like a solar cell like you have in these calculators or something like that. Okay. Ah, how fast, how slow? You can go really low, and I think you can go
30:01
below a kilohertz if you really want to. How fast? Well, the oscillator is at 16 megahertz, the system clock would be at eight megahertz, and nearly all instructions are single-cycle, except for those that access 16-bit words in memory; those take two cycles.
30:21
Okay, that's it. Fortunately, no hands up, and time's up as well.