How Role-Playing Games and Early Computing Shaped Four Decades of AI Development
Part 1: Early Foundations (1977-1990)
Author: Hawke Robinson
Series: The Path to DGPUNET and SIIMPAF
Published: October 2025
Reading Time: ~12 minutes
Series Overview
This is Part 1 of a 5-part series documenting the technical evolution from an early introduction to role-playing gaming in 1977 and hobby programming in 1979 to modern distributed GPU computing and self-hosted Artificial Intelligence (AI) infrastructure. The series traces patterns that emerged over four decades and shows how they apply to current challenges in AI development and computational independence.
The Series:
- Part 1 (this article): Early Foundations (1977-1990) - RPGs, first programs, pattern matching, and establishing core principles
- Part 2: Building Infrastructure (1990-2005) - IRC bots, NLP, Beowulf clusters, building ISPs & data centers, distributed computing, and production systems
- Part 3: Professional Applications (2005-2020) - Therapeutic gaming, educational technology, and real-time AI that outperformed commercial solutions
- Part 4: The GPU Challenge and DGPUNET (2020-2025) - GPU scarcity, centralization concerns, and building distributed GPU infrastructure
- Part 5: SIIMPAF Architecture and Philosophy - Bringing four decades of lessons together in a comprehensive self-hosted AI system
Early Foundations
In 1977, a cousin introduced me to role-playing games. I was young, and the experience did something I wasn't expecting - it changed how I thought about systems, interactions, and possibilities. Dungeons & Dragons wasn't just a game; it was a framework for thinking about dynamic interactions between rules, randomness, people, complex systems, and human choice.
I found myself organizing game sessions at libraries, game stores, parks, schools, and community centers. Not because I had grand plans, but because I wanted to play, and the nature of these social games required other people to play with. That early experience of coordinating groups, managing logistics, and creating experiences for others established effective patterns still useful to this day.
Two years later, in 1979, another cousin gave me something that would sometimes connect with, and expand, those early RPG experiences in ways I wouldn't fully understand for years - access to the University of Utah's computer network, a part of what would eventually become the Internet. At 8 or 9 years old, I was online and learning to code, interacting with people across the USA, Germany, and Australia through electronic mail and bulletin boards.
"Insult Your Computer"
One of my first fully functional, slightly more advanced programs was called "Insult Your Computer." Written in BASIC around 1979, it was exactly what it sounds like - a program where you could type insults at the computer and it would respond. Anyone who works with computers has been frustrated when the machine doesn't do what they want, especially while debugging code. But grumbling at an inanimate, implacable computer wasn't very rewarding. So, being a young kid, I figured it would be entertaining if my computer had smart-alecky responses to my grumblings, based on what I said.
The program worked through simple keyword matching and percentage-based probabilistic response generation. If your input contained certain words, the program would pick from different response categories and root phrases (think Mad Libs for witty retorts). The percentages determined which response you'd get, adding enough randomness that it didn't feel completely scripted and produced a broad, not fully predictable range of variety without frequent duplication. I wrote variations of this on whatever computer I had access to at the time: Apple ][, Apple II+, TI-99, TRS-80, a decommissioned PDP, etc.
Roughly, it looked like this:
10 REM Insult Your Computer - circa 1979
20 REM Hundreds of keyword patterns and response variations
30 DIM INSULT$(50), REPLY$(200)
40 REM Load insult keywords and reply variations (abbreviated here)
50 INPUT "Say something: "; A$
60 REM Check for multiple keyword patterns
70 GOSUB 1000: REM Analyze input for keyword matches
80 IF MATCH = 0 THEN PRINT "I don't understand.": GOTO 50
90 REM Pick random response from matched category
100 R = INT(RND(1) * REPLIES): GOSUB 2000
110 PRINT RESPONSE$
120 GOTO 50
1000 REM Pattern matching subroutine
1010 REM Checks dozens of keywords: stupid, slow, dumb, ugly, broken, annoying, stinky, etc.
1020 REM Assigns category based on insult type
1030 REM Full implementation was hundreds of lines with varied patterns
1040 RETURN
2000 REM Response generation subroutine
2010 REM Picks from category-appropriate responses
2020 REM Varied by insult type, severity, repetition
2030 REM Range included tame retorts to... colorful 9-year-old humor
2040 RETURN
The actual program was much more extensive - hundreds of lines covering many keyword patterns and response variations with the kind of humor you'd expect from a 9-year-old kid. While not sophisticated by modern standards, it introduced me to several concepts that I'd revisit throughout my career:
- Pattern matching in user input
- Percentage-based probabilistic response selection
- Creating the illusion of intelligence through variation
- The importance of context in responses
The program wasn't "intelligent" by any measure, but to a kid typing insults at a computer, it felt responsive. That gap between actual capability and perceived intelligence became something I'd explore for decades.
Making NPCs Feel Alive
Around the same time as the insult programs, I also started writing role-playing game programs: some as aids for tabletop game mastering, some to help players or generate player characters, some for computer-generated music, and of course RPG adventures (text-based at first). The games themselves weren't the hard part - tracking stats, rolling dice, and managing combat sequences was straightforward logic. The challenge was making the non-player characters (NPCs) and creatures feel less mechanical.
Early attempts were simple:
1000 REM NPC Greeting
1010 R = INT(RND(1) * 4)
1020 IF R = 0 THEN PRINT "Hello, traveler."
1030 IF R = 1 THEN PRINT "Greetings & salutations."
1040 IF R = 2 THEN PRINT "Hail and well met!"
1050 IF R = 3 THEN PRINT "Good day to you."
But I wanted more. I started tracking:
- Whether the player had talked to this NPC before
- What the player had said or done recently
- The NPC's mood based on recent events (plus some randomization)
- Simple memory of previous interactions written to files for future reference
This meant a primitive form of maintaining state between encounters and adjusting responses based on history. On an early PC with limited memory, this required thinking carefully about what to track and what to ignore.
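The same state-tracking idea translates naturally into modern code. Here is a minimal sketch in Python; the original programs were BASIC, and all the class names, response strings, and actions below are illustrative, not reconstructed from them:

```python
import random

class NPC:
    """A non-player character with simple memory and mood.

    Illustrative modern analogue of the BASIC-era NPC logic:
    remember whether the player has visited before, and let
    recent player actions shade the response.
    """

    GREETINGS_FIRST = ["Hello, traveler.", "Hail and well met!"]
    GREETINGS_REPEAT = ["Back again?", "Good to see you once more."]

    def __init__(self, name):
        self.name = name
        self.met_player = False   # has the player talked to this NPC before?
        self.mood = 0             # negative = annoyed, positive = friendly

    def react(self, player_action):
        """Adjust mood based on what the player just did."""
        if player_action == "insult":
            self.mood -= 1
        elif player_action == "gift":
            self.mood += 1

    def greet(self, rng=random):
        """Pick a greeting, shaded by memory and mood."""
        pool = self.GREETINGS_REPEAT if self.met_player else self.GREETINGS_FIRST
        self.met_player = True
        greeting = rng.choice(pool)
        if self.mood < 0:
            greeting += " What do you want?"
        return greeting
```

On 1970s-80s hardware the "memory" part of this lived in a handful of variables or a small file between sessions, which is why deciding what to track and what to ignore mattered so much.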
The RPG design perspective influenced how I thought about these systems. In tabletop RPGs, the game master remembers previous sessions, adjusts NPC reactions based on player actions, and creates continuity. I was trying to capture that in code, within severe hardware limitations.
First Commercial Work
In 1982, around 11-12 years old, I had the chance for my first paid programming gig - a point of sale and inventory management system for a mom-and-pop video rental store in Salt Lake City, Utah. They paid somewhere between $600 and $800, which seemed like an enormous amount of money to me at that age (that would be around $2,000-$2,700 in today's dollars).
The system ran on an IBM PC and handled:
- Customer rentals and returns
- Receipt printing for the customers, with their return dates, terms of service, etc.
- Inventory tracking
- Late fee calculations
- Customer contact information
- Reports and printing capabilities for the owners
Nothing fancy, but it had to be reliable. This was their business. If the system lost data or calculated fees incorrectly, it cost them money and customer trust. This was different from hobby projects where bugs were frustrating but not financially consequential.
It took me a few weeks over the summer to produce a first draft, and a few more to wrap it up. The store used the system for years thereafter. Knowing that something I wrote was being used daily, for real business operations, changed how I thought about code. It wasn't enough for it to work most of the time - it had to work consistently, handle unexpected inputs, and recover gracefully from problems. And since the owners were completely non-technical, it had to be genuinely user-friendly: no cryptic technical error messages, but plain-language explanations of what was wrong, with useful instructions for resolution.
I learned to:
- Validate all user inputs
- Save frequently and redundantly
- Provide clear error messages
- Test with real-world scenarios, not just ideal cases
- COMMENT MY CODE HEAVILY!
- Write and print out documentation people could actually use
These lessons came from necessity. When the store called with a problem in those first few weeks, I had to fix it or walk them through a solution over the phone - biking or skateboarding downhill to their store took me over 20 minutes. And if my documentation was unclear, they couldn't use the system.
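The "validate everything, explain errors in plain language" lesson can be sketched in modern Python; the field name and limits below are hypothetical, not from the actual store system (which ran on an early-1980s IBM PC):

```python
def parse_rental_days(raw):
    """Validate a rental-period entry, returning (value, error_message).

    Errors are phrased for non-technical users and suggest a fix,
    instead of surfacing a cryptic code. The 1-30 day range is an
    illustrative assumption, not the store's real policy.
    """
    text = raw.strip()
    if not text:
        return None, "The rental period is blank. Please type a number of days, like 3."
    if not text.isdigit():
        return None, f"'{text}' is not a number. Please type digits only, like 3."
    days = int(text)
    if days < 1 or days > 30:
        return None, f"{days} is outside the allowed range. Please enter between 1 and 30 days."
    return days, None
```

Every prompt in the original system followed this shape: never trust raw input, never crash on it, and always tell the operator what to do next.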
Amiga 2000 Experiments
In the late 1980s and early 1990s, I had an Amiga 2000. For those who remember, the Amiga was WAY ahead of its time (arguably 10+ years ahead of Mac & PC systems, competing with far more expensive UNIX systems in many ways) - multitasking, sophisticated graphics and sound "multimedia" before it was a buzzword, and an open hardware architecture that made upgrading, customizing, and achieving many things far easier than on Mac or PC.
I used it heavily for music and graphics creation, but also to build on earlier experiments, continuing familiar themes:
- More sophisticated NPC behaviors
- Graphics and animation for game interfaces
- Sound integration for more immersive experiences
- Better text parsing for player inputs
- Generating landscapes, maps, and much more
The Amiga's capabilities let me explore ideas that weren't practical on earlier hardware, but the core challenges remained: how do you make computer interactions feel more natural and responsive?
BBS Hosting and Early Automation
I started hosting bulletin board systems (BBSs) from my apartment in the 1990s, which meant running a system that other people connected to via modem. BBSs required constant maintenance - managing users, organizing message boards, handling file uploads and downloads, and dealing with problem users. I only had a couple of lines dedicated to this, but they were kept busy. Later this was connected to a dedicated 128 kbps ISDN Internet connection through Xmission, making it a primitive gateway between local BBS users and the Internet. I also hosted some early "virtual worlds" - various MUDs, MUSHes, and MOOs, and later even Ultima Online persistent graphical worlds using UOX and similar tools.
This led to an increasing amount of automation work to avoid getting overwhelmed by frequent administrative tasks. I wrote tools to:
- Automatically clean up old messages
- Scan uploaded files for problems (corruption, viruses, etc.)
- Manage user permissions or other account issues
- Generate activity reports
- Backup data at scheduled times
- Perform automated syncs, searches, uploads, downloads, war-dialing db updates, etc.
These weren't optional conveniences - they were necessary to keep the BBS functional without spending every waking hour on maintenance. The pattern of "automate the repetitive, preserve time for the interesting" ramped up here and has never stopped.
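As a rough modern analogue of that maintenance automation (in Python; the actual BBS tooling was platform-specific scripts, and the message format here is a hypothetical stand-in), scheduled message cleanup might look like:

```python
import time

def clean_board(messages, max_age_days=90, now=None):
    """Drop messages older than max_age_days, returning (kept, removed_count).

    Each message is a dict with a 'posted' Unix timestamp; the shape
    and the 90-day default are illustrative assumptions, not taken
    from any actual BBS software.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # 86400 seconds per day
    kept = [m for m in messages if m["posted"] >= cutoff]
    return kept, len(messages) - len(kept)
```

Run nightly from a scheduler, a routine like this is exactly the "automate the repetitive" pattern: the board stays usable without anyone manually pruning stale threads.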
Looking Back at Patterns
From 1977 to 1990, several patterns emerged that would shape everything that came later:
1. Make systems feel more responsive and human-like - From "Insult Your Computer" through NPC behaviors to BBS automation, I kept returning to the challenge of making computer interactions feel less mechanical.
2. Work within constraints - Limited memory, slow processors, extremely slow networks (300 baud!) and expensive (and fragile) storage meant thinking carefully about what was essential versus nice-to-have.
3. Reliability matters - The video rental store system taught me that code used in production contexts requires different standards than hobby projects. This systematic diagnostic thinking - understanding complex interconnected systems and methodically investigating root causes - would prove applicable across many domains, from engine and drivetrain repair to neural networks.
4. Automate maintenance - BBS hosting showed that systems need to maintain themselves as much as possible.
5. Context and memory improve interactions - Tracking previous interactions and adjusting responses made NPCs more interesting and made automation more useful.
These weren't philosophical insights at the time - they were practical responses to problems I encountered. But looking back, they formed a foundation for how I'd approach much larger and far more complex systems in the decades to come.
What Came Next
By 1990, I'd spent over a decade on and off learning to code, building small systems, and solving practical problems. The next phase would involve larger scale systems, professional contexts, and infrastructure that connected multiple machines into cohesive networks.
The Internet was expanding beyond academic and government networks. IRC (Internet Relay Chat) was becoming popular. Businesses were starting to need real network infrastructure. And I was about to learn that the patterns I'd developed on single computers scaled to networks and distributed systems in interesting ways.
That's where Part 2 of this series picks up - moving from single-system optimization to network-scale infrastructure, from hobby projects to production systems handling real load, and from simple automation to complex distributed computing.
Next in Series: Part 2: Building Infrastructure (1990-2005) - IRC bots, Beowulf clusters, and learning to build reliable systems at scale
About This Series: This is Part 1 of a 5-part series documenting the technical evolution from early hobby programming to DGPUNET (Distributed GPU Network) and SIIMPAF (Synthetic Intelligence Interactive Matrix Personal Adaptive Familiar). The series focuses on practical problems and solutions, avoiding marketing language in favor of technical accuracy and honest assessment.
About the Author: William Hawkes-Robinson has been developing software since 1979, with focus areas including distributed computing, natural language processing, educational technology, and therapeutic applications of gaming. He is the founder of RPG Research and Dev2Dev Portal LLC, and is known internationally as "The Grandfather of Therapeutic Gaming" for his long-running work applying role-playing games to therapeutic and educational contexts.
Website: https://www.hawkerobinson.com Tech Blog: https://techtalkhawke.com
Version: 2025.10.17-1215