Monday, December 8, 2025

A simple Ti-86 assembly language program typed on the calculator

First of all, a warning: if you type something in wrong, it can hard-lock the calculator, and you may have to pull the batteries and lose everything stored on it. So don't type in assembly language programs unless losing everything on the calculator is acceptable.

Some basic facts about the TI-86: it uses the Zilog Z80 processor. Because it is difficult to write relocatable Z80 assembly language programs, the TI-86 always loads them at 0D748h. Two useful ROM routines are _clrScrn at 4A82h and _puts at 4A37h.

Google Gemini wrote the following assembly code and assembled it into hex (note that Gemini did make a mistake once, which hard-locked the calculator; so again, don't try this if there is anything on the calculator you care about losing).

CD 82 4A	; CALL $4A82
21 52 D7	; LD HL, $D752
CD 37 4A	; CALL $4A37
C9	        ; RET
48 45 4C 4C 4F 00	; DB "HELLO\0"
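As a sanity check on the listing above, here is a small Python sketch (my addition, not part of the calculator program) that rebuilds the same bytes from the load address and the two ROM routine addresses given in the post, and computes where the string lands:

```python
# Rebuild the TI-86 machine code and verify the string address.
# Addresses are the ones stated in the post (load address 0D748h,
# _clrScrn at 4A82h, _puts at 4A37h).
LOAD_ADDR = 0xD748
CLR_SCRN = 0x4A82
PUTS = 0x4A37

def call(addr):
    # CD = CALL nnnn; Z80 16-bit operands are little-endian (low byte first)
    return bytes([0xCD, addr & 0xFF, addr >> 8])

def ld_hl(value):
    # 21 = LD HL, nnnn
    return bytes([0x21, value & 0xFF, value >> 8])

code_len = 3 + 3 + 3 + 1            # two CALLs, one LD HL, one RET
string_addr = LOAD_ADDR + code_len  # 0xD748 + 10 = 0xD752

program = (call(CLR_SCRN) + ld_hl(string_addr) + call(PUTS)
           + b"\xC9"                # RET
           + b"HELLO\x00")          # the null-terminated string
print(program.hex(" ").upper())
# CD 82 4A 21 52 D7 CD 37 4A C9 48 45 4C 4C 4F 00
```

The printed bytes match the hex listing above, which is a useful check before typing anything into the calculator.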

Next, we need to get it into the calculator, edit a program:

PROGRAM:ASMHELLO
AsmPrgm
CD824A
2152D7
CD374A
C9
48454C4C4F00

Then run this with: Asm(ASMHELLO)

You should get HELLO followed by Done printed on the screen. Note that this was tested on a calculator with ROM version 1.6.

The start address and the calling routine numbers were found at https://github.com/abbrev/ti-86-asm/blob/db010670f8aad1509633b01d39c0c563e4ee7fb0/stranded/ti86asm.inc

Some notes Gemini wrote for readers of this post:

🧠 The Z80 Architecture: Why We Use HL

The Zilog Z80 processor, a 1970s workhorse, forms the brain of the TI-86. To understand the program, we need to look at its core components, especially its registers.

Our program uses the HL register pair. The 21 opcode for LD HL, nnnn is specifically designed to load a 16-bit memory address. This is critical because the _puts system routine (4A37h) is hardwired by the TI-86 operating system to look only at the HL register to find the starting address of the string it needs to print.


⚙️ Deconstructing the Hex: What the CPU Sees

Typing in hex means we are speaking the Z80's native machine language. Here is a breakdown of the "magic numbers" in our code:

  • CD 82 4A (CALL $4A82): CD is the opcode for CALL. The address 4A82h is written in the Z80’s little-endian format (low byte first) as 82 4A. This is the absolute jump to the OS’s _clrScrn routine.
  • 21 52 D7 (LD HL, $D752): 21 is the opcode for "Load 16-bit value into HL." The address D752h is where our string data starts: the program loads at 0D748h and the code itself is 10 bytes long (3 + 3 + 3 + 1), so the string begins at 0D748h + 0Ah = 0D752h.
  • CD 37 4A (CALL $4A37): Another CALL, this one jumps to the _puts routine at 4A37h to draw characters to the display buffer, starting from the address currently stored in HL.
  • C9 (RET): This is the most crucial byte for stability. C9 is the opcode for Return, which safely hands execution control back to the Asm( shell, preventing a calculator lockup.
  • 48 45...00: These are the ASCII codes for 'H', 'E', 'L', 'L', 'O', and finally, the null terminator (00), which tells _puts when to stop printing.
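The two byte-level conventions in that breakdown, little-endian operands and the null terminator, can be checked with a few lines of Python (again my own sketch, not calculator code):

```python
# Little-endian: the operand bytes 82 4A read back as the address 0x4A82.
operand = bytes([0x82, 0x4A])
addr = int.from_bytes(operand, "little")
assert addr == 0x4A82  # _clrScrn

# The data bytes are plain ASCII ending in a null terminator (00),
# which is what tells _puts where the string stops.
data = bytes.fromhex("48454C4C4F00")
print(data.rstrip(b"\x00").decode("ascii"))  # HELLO
```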

🛑 The Hex Trap: Why Relocatable Code Fails Here

Our program relies on absolute addressing, which is not ideal but necessary. We tried to write relocatable code using the Z80's "Pop the PC" technique (involving CALL followed by POP HL), but that method has a hidden requirement:

It needs a Relocating Loader (a feature of advanced assemblers) to fix the address inside the CALL instruction at runtime. Since the TI-86's hex editor is primitive, it loads the CALL operand as an unfixed absolute address, which would always jump to the wrong place ($0009 in our test) and crash the calculator.

Therefore, we are forced to rely on the fact that the shell always loads programs at 0D748h to calculate the string pointer. This makes the code fast and functional, but brittle—if the TI OS ever changes that load address, the program will break.

Friday, February 28, 2025

Use computers as powerful as in 1985 or AI controls humans or ?

A way to prevent AGI from taking over or destroying humanity is to strictly limit the computing power used on unknown AI algorithms. My back-of-the-envelope calculations[1] show that restricting the hardware to 64 KiB of total storage is definitely sufficient to prevent an independence-gaining AGI, and that restricting it to 2 MiB of storage is very likely sufficient. State-of-the-art AI, on the other hand, tends to use at least 1 GiB of RAM (often much more) and processing power in the teraflops range or beyond. As for an upper limit before we get AGI, whole brain emulation provides one, but that is on the order of 1 exaflops and 1 petabyte, so we do not have a precise idea of where the limits for AGI lie. Also, we don't have a way to make sure that AI software is aligned with human goals and ethics.[2]

So here are options:

  1. Only use really weak computers (midrange 1985 computers like a Mac 512K or an Atari 520ST would almost certainly be safe)
  2. Just let AGI take control.
  3. Hope that AGI really requires very powerful computers, ban those, but allow less powerful computers that are still well above what we are sure cannot host an AGI.
  4. Hope there is outside intervention that prevents dangerous AGI (space aliens, divine intervention, dark lords of the matrix, etc.)
  5. ???

So what should humanity do? I talked to a non-computer scientist about this, and his answer was that restricting us to circa 1985 power of computers was the best choice, which actually surprised me a little. Letting AGI take control can result in extinction, or the AGI imposing rules that we don't like.[3]

The problem is that we are metaphorically experimenting with 15-kilovolt AC when we really should be experimenting with 5-volt DC, because we have a very weak understanding of AI safety.

There can be different safe paths, but one that I am fairly sure would prevent an independence-gaining AGI would be: first, do research on what the limits are (check my back-of-the-envelope calculations). Next, start working towards shutting down integrated circuit fabrication that is too advanced (a 10 μm feature size would at least make computers above the AGI limit expensive; a 1 μm feature size would also significantly limit computers). If consumer computers were standalone machines on the order of 512 KiB of RAM with 10 MFLOP/S of processing power and one or two 720 KiB floppy drives for storage, they could not be used for an independence-gaining AGI.[4] (I think one 1200 bit/s or four 300 bit/s modems could probably be allowed, and a Mini CD-R drive could probably be allowed, but that would take new research.) A hard limit of 1 GiB of RAM and 1 GFLOP/S for computers restricted to non-AI research could probably be allowed for people willing to follow licensing restrictions on what can be run on the computer. 1 GiB of RAM and 1 GFLOP/S would seriously restrict most current AI algorithms, so it would be hard to accidentally create an AGI within that much compute.
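The storage arithmetic behind the claim that such a consumer machine is safe (the same calculation as in footnote [4], against the 2 MiB threshold from [1]) can be written out as a quick check:

```python
# Check the proposed consumer machine's total rewritable storage
# against the 2 MiB back-of-the-envelope threshold from [1].
KIB = 1024
LIMIT_KIB = 2 * KIB        # 2 MiB = 2048 KiB

ram_kib = 512              # 512 KiB of RAM
floppies_kib = 2 * 720     # two 720 KiB floppy drives
total_kib = ram_kib + floppies_kib

print(total_kib, total_kib <= LIMIT_KIB)  # 1952 True
```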

It seems to me that any path that is not at risk of accidentally creating an independence-gaining AGI will be very hard to achieve. Convincing people that AGI is a problem will be hard, figuring out what the limits are and convincing people of them will be hard, and since it seems the limits are below currently existing computers, convincing people to give up running AI programs (and other programs that might accidentally turn into AI programs) will be hard.

I don't know what humanity should do. As for me personally, if I had the choice between my current 1.6 GHz 4-core CPU with 24 GB of RAM that I am typing on, versus living in a world where we had eliminated existential risk from things like uncontrolled AGI and nuclear bombs, I would gladly trade my computer in for a 512 KiB, 8 MHz computer with a floppy drive, a Mini CD-R, and a modem-level network connection, if that is what we all need to do. I am curious what others think.

These are my own opinions and not those of my employer. This document may be distributed verbatim in any media.

[1]: https://www.researchgate.net/publication/388398902_Memory_and_FLOPS_Hardware_Limits_to_Prevent_AGI and for an earlier draft https://www.lesswrong.com/posts/9kvpdK9BLSMxGnxjk/thoughts-on-hardware-limits-to-prevent-agi and if you see any mistakes I made or have questions please tell me.

[2]: https://intelligence.org/2023/04/21/the-basic-reasons-i-expect-agi-ruin/ and https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

[3]: For example, two rules that I can imagine humans disliking would be 1. no eating vertebrates or cephalopods, and 2. no going farther than 1 million km from Earth. I am not even sure that trying to change the AGI's mind on these would be a good idea (since we are vertebrates that do not want to be eaten, and we might want the AGI to impose restrictions like rule 2 on dangerous aliens).

[4]: Note that this is very unlikely to be able to host an independence-gaining AGI because the rewritable storage is under the 2 MiB limit (512 + 2 × 720 = 1952 KiB ≤ 2048 KiB). New computers such as the 512 KiB Pineapple One RISC-V computer or the Adafruit Trinket M0 would be examples of other computers that would be fine.

Tuesday, October 22, 2024

Truth, and How not to find it

This is a sermon on various thoughts on truth that I gave at the Unitarian Universalist Church in Idaho Falls on October 20th, 2024. HTML, PDF, YouTube

Sunday, October 1, 2023

Moral Questions for the 2nd and 3rd Millennium

I gave a sermon on Moral Questions for the 2nd and 3rd Millennium on Sunday, 2023-October-1, about how new technology like AI, nuclear weapons, and horse-drawn combine harvesters leads to new ethical questions. Also available in PDF.

Sunday, November 20, 2022

Open source house plans

Here are two places I have found open source house plans.
The first is by Jay Osborne, who has put up three CC BY-SA 4.0 farmhouse plans: Free Farmhouse
The second is an ongoing project to create various sustainable houses that is open-sourcing its plans: One Community Global

Wednesday, September 28, 2022

Alpha and Omega, Omicron and LaMDA

I gave a sermon last Sunday on Alpha (Genesis), Omicron (Evolution), LaMDA and Omega (The End of Human dominance).

So if we create an AGI and fail to get sufficiently good Ethics in ver, the result is extinction or hell. ... Evolution mindlessly created beings with better Ethics than it. Hopefully, we can mindfully create beings with better Ethics than us.