Friday, 24 April 2026

Migrating from BlueCat BAM v1 to v2 REST API: Authentication, Code Examples, and What Changed

Why This Post Exists

If you've worked with BlueCat Address Manager's REST API, you've probably used the v1 endpoints under /Services/REST/v1/. Those examples are all over the internet -- including BlueCat's own making-apis-work-for-you repo on GitHub, which was the go-to reference for years.

The problem: v1 is deprecated. BlueCat introduced the RESTful v2 API in BAM 9.5.0, and the v1 API is now officially called the "Legacy v1 API." If you're writing new integrations or maintaining existing ones, you need to migrate.

I went looking for an updated version of those examples and found that nobody had done it. The original making-apis-work-for-you repo hasn't been meaningfully updated since 2018. The example scripts are a mix of Python 2 and Python 3, and the REST examples all target the v1 API. BlueCat shipped the v2 API years ago, published migration guides, even released an official Python SDK (bluecat-libraries) -- but the reference examples that most people find first on Google and GitHub were never brought forward. There are a handful of community wrappers on GitHub (see pyBluecat, py-bluecat-bam-sdk), but none of them provide the kind of simple, runnable, "Episode 1 through 7" walkthrough that the original repo did.

So I built one. I rewrote every example from the original BlueCat repo using the v2 API in both Python and TypeScript, added an office network template provisioner that didn't exist before, and open-sourced it all under Apache 2.0: elephantatech/bluecat-bam-v2-examples. This post walks through the biggest changes, with code you can copy.

Official docs

The docs you'll want bookmarked live on docs.bluecatnetworks.com: the RESTful v2 API guide and BlueCat's v1-to-v2 migration guide.

Every BAM instance also ships with interactive Swagger docs at https://<your-bam>/api/docs and the OpenAPI 3.0 spec at https://<your-bam>/api/openapi.json.


Before you write any code: BAM-side setup

The API is always on -- there's no switch to enable it. But your user account has to be configured correctly or every call will fail with a 401. This trips people up because a user who works fine in the web UI might not have API access at all.

Create a dedicated API user

In the BAM web UI, go to Administration > Users and Groups and create a new user. The fields that matter (full docs: Creating an API user in Address Manager):

  • Username -- something like svc-provisioner or api-automation. Don't reuse a personal account for scripts.
  • Access Type -- this is the one that bites you. There are three options:
    • GUI -- web UI only. Cannot use the API at all.
    • API -- API only. Cannot log in to the web UI.
    • GUI and API -- both.
    For automation, use API (or GUI and API). If the user is set to GUI only, your login calls will fail with a 401 even though the password is correct. The user also needs read and/or write access rights to the objects it will touch in Address Manager.
  • User Type -- Administrator has access to everything (DNS, IPAM, DHCP, system settings). Non-Administrator is limited to DNS and IPAM only and needs explicit access rights on every object it should touch.

Set access rights

If you're using a Non-Administrator user, you need to grant access rights on the objects the user will work with. Access rights are hierarchical -- they're set per object (Configuration, Block, View, Zone, Network) and inherit down the tree. The levels are:

  • View -- read-only
  • Add -- can create child objects
  • Change -- can modify objects
  • Full Access -- create, modify, delete

For a provisioner that creates networks, zones, and DHCP scopes, you need at least Full Access on the target Configuration. Or you can be more granular and set Add + Change on specific Blocks and Views. An Administrator user skips all of this -- it has implicit full access everywhere.

Other things to check

  • HTTPS -- BAM ships with a self-signed certificate. Your API client will reject it unless you either import the cert into your trust store or disable verification (verify=False in Python, rejectUnauthorized: false in Bun). For production, get a proper CA-signed cert on the BAM -- configure it under Administration > HTTPS Configuration.
  • Session timeout -- the API token lifetime is tied to the BAM session inactivity timeout. The v2 session response includes apiTokenExpirationDateTime (usually 24 hours). If your script runs longer than the timeout, you'll need to re-authenticate.
  • Firewall -- BAM has a built-in firewall under Administration > Firewall. There's no API-specific IP allowlist, but you can restrict which IPs can reach BAM on port 443. If your automation server is getting connection refused, check here.
  • No rate limiting -- BAM doesn't throttle API requests. But it's an appliance with finite resources, so don't hammer it with hundreds of concurrent sessions. Reuse your session and call logout when you're done.
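Since the v2 session response includes apiTokenExpirationDateTime, long-running scripts can check it before each call and re-authenticate when the token is about to lapse. A minimal sketch -- the helper name is mine, and it assumes the timestamp comes back as an ISO 8601 string:

```python
from datetime import datetime, timezone

def token_expired(expiration_iso: str, skew_seconds: int = 60) -> bool:
    """Return True when the token expires within skew_seconds from now.
    Assumes apiTokenExpirationDateTime is an ISO 8601 string."""
    exp = datetime.fromisoformat(expiration_iso)
    if exp.tzinfo is None:  # treat naive timestamps as UTC
        exp = exp.replace(tzinfo=timezone.utc)
    remaining = (exp - datetime.now(timezone.utc)).total_seconds()
    return remaining <= skew_seconds
```

When it returns True, POST to /api/v2/sessions again and swap the new token into your headers.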

What changed

  • Base URL -- v1: /Services/REST/v1/ -> v2: /api/v2/
  • Auth -- v1: GET with credentials in the URL -> v2: POST with a JSON body
  • Token format -- v1: parsed from a string ("Session Token-> BAMAuthToken: abc123") -> v2: a JSON field ({"apiToken": "abc123"})
  • API style -- v1: RPC (POST /v1/addHostRecord?viewId=123&absoluteName=...) -> v2: REST (POST /api/v2/zones/456/resourceRecords)
  • Response format -- v1: pipe-delimited properties ("name=x|connected=true|") -> v2: clean JSON objects
  • Hierarchy traversal -- v1: getEntityByName calls chained manually -> v2: resource paths (/configurations/{id}/views/{id}/zones)
  • Documentation -- v1: static docs only -> v2: OpenAPI 3.0 spec + Swagger UI on every BAM instance

BlueCat's Official Python SDK: bluecat-libraries

BlueCat does have an official Python SDK. The bluecat-libraries package on PyPI is published and maintained directly by BlueCat Networks. I verified this -- the PyPI metadata lists "BlueCat" as both author and maintainer, the homepage points to docs.bluecatnetworks.com, and the copyright is "BlueCat Networks (USA) Inc. and its affiliates and licensors." It covers the v2 API, the legacy v1 API, Failover, DNS Edge, and Micetro. Latest version is 25.3.0 (November 2025), requires Python 3.11+, Apache 2.0 licensed.

pip install bluecat-libraries

So why did I write my own client? The official SDK is the right choice for production systems, but it hides what's happening over the wire. My examples use raw requests calls in a thin wrapper so you can see every URL, every JSON body, every header -- which makes it better for learning the API. There's also no TypeScript/Node.js SDK from BlueCat at all, so the Bun examples in this repo fill that gap.

Not every project needs the full library either. If you're writing a small cron job that registers a few DNS records or a provisioning hook that grabs the next available IP for a new VM, a single-file client with just requests is simpler to drop into an existing project and easier for your team to read without studying SDK docs. For lightweight automation, a thin wrapper is all you need. For production IPAM tooling or multi-API workflows across BAM and DNS Edge, use bluecat-libraries.


Authentication: where it actually breaks

If anything trips you up during migration, it'll be auth. The v1 and v2 approaches have nothing in common.

v1 auth (old way)

In v1, you send credentials as GET parameters in the URL. Yes, really. And the response is a raw string you have to parse:

# v1: Credentials in the URL (!) + string parsing
import requests

bam_url = "https://bam.lab.corp"
login_url = f"{bam_url}/Services/REST/v1/login?username=api&password=pass"

# Credentials sent as GET parameters -- visible in logs, browser history, proxies
response = requests.get(login_url)

# response.json() returns a plain string in v1, not a dict
# It looks like: "Session Token-> BAMAuthToken: abc123 ..."
# You have to split it manually to extract the token
token = str(response.json())
token = token.split()[2] + " " + token.split()[3]

# Then set it as a header for subsequent calls
header = {'Authorization': token, 'Content-Type': 'application/json'}

# ... make API calls ...

# Logout
requests.get(f"{bam_url}/Services/REST/v1/logout?", headers=header)

The obvious problems: your password is in the URL, which means it shows up in server logs, proxy logs, and browser history. The token comes back as a raw string you have to split on whitespace. And if auth fails, you get no structured error -- just a string.

v2 auth (new way)

In v2, you POST JSON to a sessions endpoint and get back actual JSON:

# v2: JSON body + structured response
import requests

bam_url = "https://bam.lab.corp"

# Credentials sent in POST body -- not in the URL
resp = requests.post(
    f"{bam_url}/api/v2/sessions",
    json={"username": "admin", "password": "pass"},
)
resp.raise_for_status()
data = resp.json()

# Token is a clean JSON field -- no string parsing
token = data["apiToken"]

# The response also includes:
#   data["apiTokenExpirationDateTime"] -- when the token expires (~24h)
#   data["basicAuthenticationCredentials"] -- pre-encoded base64 for Basic auth

# Use Bearer auth for subsequent calls
session = requests.Session()
session.headers.update({
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
})

# ... make API calls ...

# Logout
session.delete(f"{bam_url}/api/v2/sessions")

Credentials stay in the POST body, not the URL. Token is a JSON field you can just read. You also get an expiry timestamp so you know when to refresh, and proper HTTP status codes when something goes wrong. Both Bearer and Basic auth work.
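The basicAuthenticationCredentials field saves you an encoding step, but if you ever need to build the Basic header yourself, it's just base64 of username:password. A small sketch (the helper name is mine):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build a Basic auth header -- equivalent to the pre-encoded
    basicAuthenticationCredentials field in the v2 session response."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```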

Same thing in TypeScript / Bun

// v2 auth in TypeScript (Bun)
const resp = await fetch("https://bam.lab.corp/api/v2/sessions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username: "admin", password: "pass" }),
});

const { apiToken } = await resp.json();

// All subsequent requests use Bearer token
const data = await fetch("https://bam.lab.corp/api/v2/configurations", {
  headers: { Authorization: `Bearer ${apiToken}` },
}).then(r => r.json());

A simple v2 client

In the original repo, every script repeated the same boilerplate -- URL construction, token parsing, header building. With v2, you can wrap all of that once and forget about it:

import requests

class BAMClient:
    def __init__(self, url, username, password):
        self.api_url = f"{url.rstrip('/')}/api/v2"
        self.session = requests.Session()
        self.username = username
        self.password = password

    def login(self):
        resp = self.session.post(
            f"{self.api_url}/sessions",
            json={"username": self.username, "password": self.password},
        )
        resp.raise_for_status()
        self.session.headers.update({
            "Authorization": f"Bearer {resp.json()['apiToken']}",
            "Content-Type": "application/json",
        })

    def logout(self):
        self.session.delete(f"{self.api_url}/sessions")

    def __enter__(self):
        self.login()
        return self

    def __exit__(self, *args):
        self.logout()

    def get(self, path, params=None):
        resp = self.session.get(f"{self.api_url}{path}", params=params)
        resp.raise_for_status()
        return resp.json() if resp.content else None

    def post(self, path, data=None):
        resp = self.session.post(f"{self.api_url}{path}", json=data)
        resp.raise_for_status()
        return resp.json() if resp.content else None

Usage:

with BAMClient("https://bam.example.com", "admin", "password") as bam:
    configs = bam.get("/configurations")
    for cfg in configs["data"]:
        print(f"  [{cfg['id']}] {cfg['name']}")

Endpoint changes, side by side

Adding a host record

In v1 you had to walk the hierarchy with getEntityByName calls, build URLs with string concatenation, and pass everything as query parameters:

# v1: Walk Config -> View, then construct URL manually
getEntityByName = mainurl + "getEntityByName?parentId=" + str(e_parentId) \
                        + "&name=" + e_name + "&type=" + e_type

addHostRecord = mainurl + "addHostRecord?viewId=" + str(r_viewId) \
                + "&absoluteName=" + r_absoluteName \
                + "&addresses=" + r_addresses \
                + "&ttl=" + r_ttl \
                + "&properties=reverseRecord=true"

response = requests.post(addHostRecord, headers=header)

v2 -- resource paths with JSON bodies. The hierarchy traversal is still there, but it reads like actual REST:

# v2: Clean REST resource paths
with BAMClient(url, user, password) as bam:
    config = bam.get("/configurations", params={"filter": "name:eq('main')"})
    config_id = config["data"][0]["id"]

    view = bam.get(f"/configurations/{config_id}/views", params={"filter": "name:eq('default')"})
    view_id = view["data"][0]["id"]

    # Find zone, then create host record -- JSON body, not query string
    zones = bam.get(f"/views/{view_id}/zones")
    zone_id = zones["data"][0]["id"]

    bam.post(f"/zones/{zone_id}/resourceRecords", data={
        "type": "HostRecord",
        "name": "FINRPT02",
        "addresses": [{"address": "192.168.0.16"}],
        "reverseRecord": True,
    })
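Those filter strings are easy to typo, so I find it worth wrapping the common exact-name case in a one-liner. The eq grammar is the one used above; the function is mine, and it assumes the name contains no single quotes:

```python
def name_eq(value: str) -> dict:
    """Build the params dict for an exact-name v2 filter, e.g. name:eq('main')."""
    return {"filter": f"name:eq('{value}')"}
```

Then the lookup reads as bam.get("/configurations", params=name_eq("main")).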

DHCP reservation

This one was always painful in v1. The assignNextAvailableIP4Address call needed host info packed into a comma-delimited string, and properties as pipe-delimited key-value pairs:

# v1: Encoding nightmare
hostInfo = hostname + "." + zonename + "," + str(viewinfo['id']) + ",true,false"
params = {
    "configurationId": config_id,
    "parentId": network_id,
    "macAddress": "BB:CC:DD:AA:AA:AA",
    "hostInfo": hostInfo,
    "action": "MAKE_DHCP_RESERVED",
    "properties": "name=appsrv23|locationCode=US DAL|",
}
response = requests.post(assignNextAvailableIP4Addressurl, params=params, headers=header)

v2 -- just JSON:

# v2: Just JSON
bam.post(f"/networks/{network_id}/nextAvailableAddress", data={
    "action": "MAKE_DHCP_RESERVED",
    "macAddress": "BB:CC:DD:AA:AA:AA",
    "name": "appsrv23",
})

Server properties: no more pipe parsing

This one drove me crazy. In v1, server properties came back as a single pipe-delimited string and you had to parse them into a dict yourself:

# v1: Parse pipe-delimited properties manually
for server in serverslist:
    propertiesdic = {}
    for item in server['properties'].split("|"):
        if item != '':
            key, value = item.split("=", 1)
            propertiesdic[key] = value
    server.update(propertiesdic)
    del server['properties']

# v2: Properties are just JSON fields. That's it.
servers = bam.get(f"/configurations/{config_id}/servers")
for server in servers["data"]:
    print(f"  {server['name']} - connected: {server.get('connected')}")
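If you're migrating incrementally and still have v1 responses in flight, the manual parsing loop above is worth packaging once as a helper. A sketch that also tolerates empty segments and values containing an equals sign:

```python
def parse_v1_properties(raw: str) -> dict:
    """Parse a v1 pipe-delimited properties string like
    "name=x|connected=true|" into a plain dict."""
    result = {}
    for item in raw.split("|"):
        if item and "=" in item:
            key, _, value = item.partition("=")
            result[key] = value
    return result
```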

Office template provisioner

I also added something that wasn't in the original repo at all: a template provisioner. You describe your office in a YAML file -- VLANs, subnets, DHCP pools, hardware MAC addresses, WiFi SSIDs -- and the script creates everything in BAM.

It runs in dry-run mode by default, so you can see what it would create without a BAM instance:

$ uv run python examples/07_office_template.py

[DRY RUN] Provisioning: Chicago Office 1 (chi1)
  Domain: chi1.corp.example.com
  Supernet: 10.40.0.0/20

1. Create IP block: 10.40.0.0/20

2. Create networks:
  VLAN  10 | 10.40.0.0/23       | chi1-vlan10-Corp-Data
  VLAN  30 | 10.40.4.0/22       | chi1-vlan30-Corp-WiFi
  VLAN  40 | 10.40.8.0/23       | chi1-vlan40-Guest-WiFi

5. Hardware provisioning:
  switches        | sw-core01            | 10.40.10.129    | AA:BB:CC:01:01:01
  wireless_aps    | ap-1f-01             | 10.40.10.140    | AA:BB:CC:02:01:01
  printers        | pr-1f-01             | 10.40.0.20      | AA:BB:CC:03:01:01

[DRY RUN] Total actions: 25
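I won't reproduce the full template here, but the shape is roughly this -- the field names below are illustrative, not the exact schema, so check the repo for the real file:

```yaml
site:
  name: Chicago Office 1
  code: chi1
  domain: chi1.corp.example.com
  supernet: 10.40.0.0/20
vlans:
  - id: 10
    name: Corp-Data
    cidr: 10.40.0.0/23
  - id: 40
    name: Guest-WiFi
    cidr: 10.40.8.0/23
hardware:
  switches:
    - {name: sw-core01, ip: 10.40.10.129, mac: "AA:BB:CC:01:01:01"}
```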

Try it out

Python (uv)

git clone https://github.com/elephantatech/bluecat-bam-v2-examples.git
cd bluecat-bam-v2-examples/python
uv sync
uv run python examples/07_office_template.py

TypeScript (Bun)

cd ../nodejs
bun install
bun examples/04-office-template.ts

When you have a BAM instance

export BAM_URL="https://your-bam.example.com"
export BAM_USER="admin"
export BAM_PASS="your-password"
DRY_RUN=false uv run python examples/07_office_template.py

The short version

Stop using v1. BlueCat calls it "Legacy" now. Auth is POST with JSON instead of credentials in the URL. Responses are real JSON instead of pipe-delimited strings. Endpoints are REST resource paths instead of RPC-style query strings. Every BAM instance has Swagger UI at /api/docs and an OpenAPI spec at /api/openapi.json so you can generate clients in any language.

For production Python, use BlueCat's official bluecat-libraries SDK -- it's maintained by BlueCat Networks and covers v2, failover, DNS Edge, and Micetro. For lightweight scripts or TypeScript, there's no official option. The clients in this repo are a good starting point.

The full project with both Python and TypeScript examples is on GitHub under Apache 2.0: elephantatech/bluecat-bam-v2-examples

If you spot something wrong or want to add an example, open an issue or PR.

Thursday, 23 April 2026

How Bad Animations Made Me Rethink Terminal UX — Moon Traveler v0.5.2

When I shipped v0.5.1 of Moon Traveler Terminal, I thought the narrative intro was enough to make the game feel alive. A flight recorder story. A drone scan report with live stats. Text appearing line by line.

It wasn't enough.

The game still felt like a wall of text. You'd type scan, get a dump of locations. Type travel, watch a progress bar crawl. Type talk, and a creature's response would just appear. No weight. No presence. The terminal was functional, but it was dead.

The Problem Wasn't Missing Animations — It Was Missing Breath

I didn't realize this until I watched someone else play. They were rushing through commands, barely reading the output. Scan, travel, talk, scan, travel. The game had no rhythm. Every action completed instantly and blurred into the next.

Terminal games have a unique problem. You don't have visual cues like camera movement or screen transitions to tell the player "something just happened." Without those cues, every command feels the same. A hazard event that should be tense gets the same visual treatment as checking your inventory.

The First Attempt Was Terrible

My first animation attempt was appending frames to the RichLog — the scrollable game output. Each frame was a new line. The scan animation looked like this scrolling down the screen:

((.))  ((.))
((.))  ((.))
((.))  ((.))
((.))  ((.))

Four identical lines cluttering the game log. You couldn't scroll back to read previous output without wading through animation debris. It was worse than no animation at all.

The fundamental issue: RichLog is append-only. You can't update a line in place. Every frame permanently adds to the scroll history.

The Fix: A Dedicated 2-Line Widget

The breakthrough was adding a Static widget between the game log and the status bar. Just two lines. That's it.

┌── RichLog (scrollable game output) ──┐
│                                       │
├── #animation-bar (2 lines, in-place) ─┤
├── Status Bar (vitals) ────────────────┤
│  Crash Site >  [input]                │
└───────────────────────────────────────┘

The #animation-bar uses height: auto; max-height: 2 in the Textual CSS. When empty, it collapses to zero height — invisible. When an animation plays, it expands to show the sprite. When done, it collapses again. The game log never gets polluted.

The CSS is dead simple:

#animation-bar {
    height: auto;
    max-height: 2;
}

Every animation function follows the same pattern:

  1. Check if animations are enabled (config + runtime flag)
  2. Check if the animation widget exists (TUI bridge present)
  3. Play frames in place, each replacing the previous
  4. Hold the last frame so the player can read it
  5. Clear the widget

If animations are disabled, fall back to a simple text message. No crash, no empty space.
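The five steps above are generic enough to pull into one helper. Here's a sketch -- the render callback and names are mine, standing in for the Textual widget update:

```python
import time

def play_frames(frames, render, delay=0.35, hold=0.6):
    """Play frames in place: each render() call replaces the previous
    frame, the last frame is held so the player registers it, then the
    widget is cleared by rendering an empty string."""
    for frame in frames:
        render(frame)
        time.sleep(delay)
    time.sleep(hold)   # hold the final frame
    render("")         # collapse the 2-line animation bar
```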

Why Timing Is Everything

Here's the thing nobody tells you about terminal animations: the frames don't matter nearly as much as the pauses between them.

My first scan animation used a 0.15-second delay between frames. It was a blur. You couldn't even see the radar spinning. Bumped it to 0.35 seconds and suddenly it felt like the drone was actually scanning something. The player's eye had time to register each frame.

But timing isn't just about animation speed. It's about the gaps between actions.

I added a beat() function — a simple time.sleep(0.8) after every valid command. No visual output. Just a pause. That tiny breath between "you typed something" and "the game responds to the next thing" changed the entire feel. It gave each action weight.

def beat(duration: float = 0.8):
    """Pause to let the player absorb output."""
    if not _enabled():
        return
    time.sleep(duration)

Here's what the game flow looks like with proper timing:

Crash Site > scan
  [scan animation — 0.35s per frame]
  [results appear]
  [beat — 0.8s pause]
Crash Site > travel Frost Ridge
  [drone flies across the field]
  Arrived at Frost Ridge.
  [beat]
Frost Ridge > look
  [eyes scan left to right — 0.35s]
  [location description]
  [beat]

Without those beats, the output runs together. With them, each command feels like a distinct moment. The player naturally slows down and reads what's on screen.

The Drone That Grows With You

The most satisfying animation to build was the travel drone. It's a 2-line ASCII sprite that moves across the animation bar as you travel between locations.

The drone evolves with upgrades. Here's the full progression:

Base drone (no upgrades):
  [ ]--(+)--[ ]
  \___________/

After 1 upgrade (range module):
  [O]--(+)--[O]
  \[]_________/

After 2 upgrades:
  [O]--(+)--[O]
  \[]_______[]/

After 3 upgrades:
  [O]--(+)--[O]
  \[][]_____[]/

After 4 upgrades:
  [O]--(+)--[O]
  \[][]___[][]/

FULLY UPGRADED (5 modules):
  [O]--(+)--[O]
  \[][][]_[][]/

The eyes change from empty to O with any upgrade — the drone "wakes up." The belly fills with [] pairs as you install modules. Each pair fills from the edges inward, creating a visual symmetry.
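The progression is regular enough to generate rather than hand-draw. A sketch of how the sprite could be built -- the real game's code may differ; this just reproduces the frames listed above:

```python
def drone_sprite(upgrades: int) -> tuple[str, str]:
    """Render the 2-line drone for 0-5 installed modules. Eyes wake up
    with any upgrade; [] pairs fill the 11-char belly from the edges
    inward, left side first, keeping both lines 13 characters wide."""
    upgrades = max(0, min(5, upgrades))
    eyes = "O" if upgrades else " "
    top = f"[{eyes}]--(+)--[{eyes}]"
    left, right = (upgrades + 1) // 2, upgrades // 2
    belly = "[]" * left + "_" * (11 - 2 * upgrades) + "[]" * right
    return top, "\\" + belly + "/"
```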

The Debugging Adventures

This was tricky to get right. Two bugs that cost me hours:

Rich eating the eyes. The terminal rendering library, Rich, interprets [o] as a markup style tag. The drone's eyes would vanish. Every [ in the sprite needs escaping as \[. But ] does NOT need escaping — adding \] makes a literal backslash appear. I learned this the hard way.

The drunk drone. The top line was 13 characters. The bottom was 12. Off by one. As the drone moved across the screen, the bottom line would drift to the left, making it look like it was flying sideways. I expanded the belly from 9 to 11 characters to match — both lines exactly 13 visible characters.

# The alignment that took an entire session to debug:
[O]--(+)--[O]   ← 13 chars
\___________/   ← 13 chars (\ + 11 belly + /)

Late-Game Tension Through Animation

After 24 in-game hours, the animations shift. The scan radar picks up interference frames. Hazard flashes use triple exclamation marks. The travel animation occasionally glitches with interference patterns.

Normal scan:      ((*))  ((*))
Late-game scan:   ((!))  ((!))   ← interference

Normal hazard:    /!\
                  — HAZARD —

Late-game hazard: /!\
                  !!! HAZARD !!!

It's a small touch, but it reinforces the narrative: the environment is deteriorating. The longer you survive, the more hostile Enceladus becomes. The animations tell that story without a single word of dialogue.
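Mechanically this is just a frame lookup keyed on elapsed time. Something like the following -- the frames are the ones shown above, the function names are mine:

```python
def scan_frame(hours_elapsed: float) -> str:
    """Radar frame: interference glyphs once the environment degrades."""
    return "((!))  ((!))" if hours_elapsed >= 24 else "((*))  ((*))"

def hazard_banner(hours_elapsed: float) -> str:
    """Hazard flash: triple exclamation marks in the late game."""
    if hours_elapsed >= 24:
        return "/!\\\n!!! HAZARD !!!"
    return "/!\\\n— HAZARD —"
```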

The Animation System Architecture

The whole system is one file: animations.py, 250 lines. Here's the gate pattern every function uses:

def _can_animate() -> bool:
    """Check if the animation widget is available."""
    return _enabled() and hasattr(ui.console, "animate_frame")

def scan_sweep(hours_elapsed=0):
    if not _can_animate():
        ui.console.print("  Scanning surroundings...")
        time.sleep(0.5)
        return
    # ... play animated version ...

The _can_animate() gate checks two things: is the animation system enabled (config setting + runtime flag), and does the TUI bridge exist. This means animations work in the full TUI, gracefully degrade in headless mode, and can be toggled with --disable-animation.

Thread safety comes from Textual's call_from_thread. The game logic runs in a worker thread, but widget updates happen on the main thread. The bridge handles the crossing:

def animate_frame(self, text):
    bar = getattr(self._app, "_animation_bar", None)
    if bar:
        self._app.call_from_thread(bar.update, text)

What I Learned

  1. Two lines is enough. You don't need elaborate terminal graphics. A 2-line widget that appears and disappears adds more life than any amount of scrolling text art.
  2. Timing matters more than frames. A well-timed pause does more than a complex sprite. The beat() function is literally just time.sleep(0.8) and it transformed the game's pacing.
  3. In-place updates beat scrolling. Animations that append to the scroll history are worse than no animations. Dedicated widgets that update in place are the way.
  4. Always hold the last frame. 0.6 seconds before clearing. If the animation vanishes instantly, the player's brain never registers it happened.
  5. Fallback gracefully. Every animation checks if the widget exists before playing. If it doesn't, print a simple message. Never crash, never leave blank space.
  6. Let the drone grow. Cosmetic progression that reflects real gameplay changes — the belly filling with modules — gives the player visible proof their effort matters. Even in ASCII.

The game went from a wall of text to something that breathes. Every scan feels like a scan. Every trip feels like a journey. Every hazard feels dangerous. All it took was two lines of screen space and some carefully tuned time.sleep() calls.


Moon Traveler Terminal v0.5.2 is out now — ASCII animations, drone sprite evolution, in-place upgrades, and LLM performance diagnostics.

GitHub | Release Notes | All Releases

Tuesday, 21 April 2026

Automated Screenshot Testing for a Python Terminal Game

I've been building a terminal-based survival game in Python using the Textual framework. It runs as a TUI (Terminal User Interface) with Rich markup, threaded game logic, and SQLite for persistence. The game is called Moon Traveler Terminal.

The problem: I needed automated screenshots for documentation, GitHub Pages, and regression testing. Taking them manually every release was not sustainable. So I built a pilot script that plays the entire game, captures 27 screenshots at key moments, and validates the output.

Here's what I learned and the problems I had to solve.

The Architecture Challenge

The game has a split-thread design:

  • Worker thread - game logic, LLM inference, time.sleep() for animations
  • Main thread - Textual UI rendering, input handling

Every print() call in the game routes through a thread-safe bridge to Textual's RichLog widget:

class UIBridge:
    def print(self, *args, **kwargs):
        # Route to Textual's RichLog via call_from_thread
        self._app.call_from_thread(self._log.write, args[0])

Each print is immediately visible in the UI. No buffering. This is great for the player but tricky for automated testing - you never know exactly when output finishes rendering.
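One way to cope with that is to poll the log text until it stops changing for a few consecutive checks. A sketch of the idea -- get_text and the names are mine, not from the game:

```python
import time

def wait_for_stable(get_text, interval=0.3, stable_for=2, timeout=10.0):
    """Poll get_text() until it returns the same value stable_for times
    in a row (output has settled), or give up after timeout seconds."""
    last, same, waited = None, 0, 0.0
    while waited < timeout:
        cur = get_text()
        same = same + 1 if cur == last else 0
        if same >= stable_for:
            return True
        last = cur
        time.sleep(interval)
        waited += interval
    return False
```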

Step 1: Textual's Auto-Pilot

Textual has a built-in test harness. You pass an auto_pilot coroutine to app.run() and it drives the UI programmatically. No real terminal needed.

from src.tui_app import MoonTravelerApp

async def screenshot_pilot(pilot):
    app = pilot.app

    async def take(name, desc):
        await pilot.pause(0.5)
        app.refresh()
        await pilot.pause(0.5)
        app.save_screenshot(f"assets/{name}.svg")

    async def send(text, wait=4.0):
        app.command_queue.put(text)
        await pilot.pause(wait)

    # Take title screenshot
    await pilot.pause(3.0)
    await take("tui-title", "Title screen")

    # Play the game
    await send("look", wait=3.0)
    await take("tui-look", "Look at crash site")

    await send("scan", wait=3.0)
    await take("tui-scan", "Scan results")

app = MoonTravelerApp()
app.run(auto_pilot=screenshot_pilot)

Three ways to talk to the game:

  • command_queue.put(text) -- injects a command (like typing + Enter)
  • bridge.push_response(text) -- answers interactive prompts (y/n, menus)
  • wait_for_ask_mode() -- polls until the game blocks on a prompt
Step 2: Handling Branching Game Flows

The game has branching prompts - new game vs load, difficulty selection, player name. You cannot just hardcode sleep timers. You have to wait for the game to actually ask a question.

async def wait_for_ask_mode(timeout=10.0):
    """Wait until the game blocks on a prompt."""
    elapsed = 0.0
    while elapsed < timeout:
        if app._ask_mode:
            return True
        await pilot.pause(0.3)
        elapsed += 0.3
    return False

# Navigate: New Game -> Easy -> Player name
if await wait_for_ask_mode(timeout=5.0):
    await respond("1", wait=2.0)      # "New Game"
if await wait_for_ask_mode(timeout=5.0):
    await respond("1", wait=2.0)      # "Easy" difficulty
if await wait_for_ask_mode(timeout=5.0):
    await respond("Screenshot", wait=3.0)  # Player name

This pattern made the script reliable across different game states - fresh install with no saves, existing saves, different model loading times.

Problem 1: Screenshots Only Capture the Viewport

This was the first real surprise. Textual's save_screenshot() exports what's currently visible in the viewport - about 24 lines. Content that scrolled off the top is gone from the SVG.

I built a narrative intro that's 18 lines long. By the time the boot sequence finishes, the heading at the top has scrolled away. My first validation checked for "FLIGHT RECORDER" in the screenshot - it failed because the heading was above the viewport.

The fix: Always validate against text near the bottom of each screen, not the top.

import re

def _svg_text(path):
    """Extract visible text from an SVG screenshot."""
    with open(path) as f:
        return " ".join(
            t.replace("&#160;", " ").strip()
            for t in re.findall(r">([^<]+)<", f.read())
            if t.strip() and len(t.strip()) > 2
        )

# Validate bottom-of-screen content, not headers
validations = [
    ("tui-intro", "rescue", "Intro narrative visible"),
    ("tui-help", "drone", "Help shows commands"),
    ("tui-victory", "Grade", "Victory shows score"),
    ("tui-scores", "Ripley", "Leaderboard has entries"),
]

for name, expected, desc in validations:
    text = _svg_text(f"assets/{name}.svg")
    status = "PASS" if expected.lower() in text.lower() else "FAIL"
    print(f"  {status}: {desc}")

Problem 2: SQLite Data Pollution

The screenshot script seeds fake leaderboard entries so the scores screenshot is not empty:

from src.save_load import record_score

record_score(820, "A", True, "short", 18, 1200, 3, 12345,
             player_name="Ripley")
record_score(650, "B", True, "medium", 35, 2400, 2, 67890,
             player_name="Dallas")
record_score(410, "C", False, "long", 12, 900, 1, 11111,
             player_name="Lambert")

The problem: these entries persisted across runs. After running the script 5 times, I had 15 fake entries in my real database mixed with actual play data.

The fix: Clean up test data on exit, keyed by player name:

import sqlite3

# Always clean up, even if the script crashes elsewhere
with sqlite3.connect(str(db_path)) as conn:
    conn.execute(
        "DELETE FROM leaderboard WHERE player_name "
        "IN ('Ripley', 'Dallas', 'Lambert', 'Screenshot')"
    )

I also added state logging at every screenshot checkpoint - player location, inventory, repair progress, and full DB row counts. When a screenshot looks wrong, the debug log tells you exactly what the game state was at capture time:

def log_game_state(ctx, label=""):
    p = ctx.player
    log(f"[{label}] Loc={p.location_name}")
    log(f"[{label}] Food={p.food:.0f}% Water={p.water:.0f}%")
    log(f"[{label}] Inventory={dict(p.inventory)}")

def log_db_state(label=""):
    with sqlite3.connect(str(db_path)) as conn:
        for table in ["saves", "chat_history", "leaderboard"]:
            n = conn.execute(f"SELECT COUNT(*) FROM [{table}]").fetchone()[0]
            log(f"[DB {label}] {table}: {n} rows")

Problem 3: Animations Break Script Timing

I added ASCII frame animations to the game - scan sweeps, travel progress bars, hazard flashes. Each one adds 0.3 to 1.0 seconds of time.sleep() in the worker thread. The screenshot script's fixed await pilot.pause(3.0) durations were suddenly too tight.

The wrong fix would be to disable animations in the config file - that persists and would turn off animations for the user's next real play session.

The right fix: A runtime kill switch that only lasts for the current process:

# src/animations.py
_force_disabled = False

def force_disable():
    """Session-only. Does NOT persist to config."""
    global _force_disabled
    _force_disabled = True

def _enabled():
    if _force_disabled:
        return False
    from src.config import get_animations_enabled
    return get_animations_enabled()

The game's --super mode (used by test scripts) calls force_disable() at startup. Real players still get animations. Test scripts get deterministic timing.

Problem 4: Capturing the Game Context

The screenshot script needs access to the live game state (player location, creatures, inventory) to make smart decisions - like finding a creature to talk to. But the game context only exists inside the worker thread.

Solution: monkey-patch the game loop to capture the context object:

import threading
from src import game

_game_ctx = None
_ctx_ready = threading.Event()
_original_game_loop = game.game_loop

def _patched_game_loop(ctx):
    global _game_ctx
    _game_ctx = ctx
    _ctx_ready.set()        # Signal that context is ready
    return _original_game_loop(ctx)

game.game_loop = _patched_game_loop

# Later in the pilot:
_ctx_ready.wait(timeout=30)
ctx = _game_ctx

# Now we can query live game state
creature_loc = None
for c in ctx.creatures:
    if c.location_name in ctx.player.known_locations:
        creature_loc = c.location_name
        break

The End Result

The full script plays an entire game: new game, explore, scan, travel to creatures, have LLM-powered conversations, escort allies back to the ship, repair and win. 27 screenshots, 10 validated, all in about 3 minutes.

$ uv run python scripts/tui_screenshots.py

Taking TUI screenshots...
  Saved: assets/tui-title.svg — Title screen
  Saved: assets/tui-intro.svg — Flight recorder narrative
  Saved: assets/tui-crash-site.svg — Crash site after boot
  Saved: assets/tui-look.svg — Look at crash site
  ...
  Saved: assets/tui-victory.svg — Victory screen
Validation: 10 passed, 0 failed
  Cleaned up seeded leaderboard entries
Done! Screenshots saved to assets/

Lessons Learned

  1. Textual's auto-pilot is powerful but you need polling patterns like wait_for_ask_mode() for branching flows. Fixed sleeps alone will not work.
  2. Viewport screenshots miss scrollback. Validate against content near the bottom of the screen, never the top.
  3. Clean up test data. If your script seeds a database, delete those rows on exit. Key by a known player name so you can always find them.
  4. Animations need a runtime kill switch for automated scripts. Never persist test-only config changes.
  5. Log game state at capture time. When a screenshot fails validation, you need the context - not just a failed assertion.
  6. Monkey-patching the game loop to capture the context object is ugly but effective. It lets the pilot script make decisions based on live game state.
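
The polling pattern from lesson 1 can be sketched generically: instead of a fixed sleep, poll a predicate with a timeout. This is a hypothetical helper in the spirit of wait_for_ask_mode(), not the script's actual code:

```python
import asyncio

async def wait_for(predicate, timeout=10.0, interval=0.1):
    """Poll `predicate()` until it returns True or `timeout` expires."""
    elapsed = 0.0
    while elapsed < timeout:
        if predicate():
            return True
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"condition not met within {timeout}s")

# Against a Textual pilot, usage might look like:
#   await wait_for(lambda: app.screen.name == "ask_mode", timeout=15)
```

The timeout turns a hung screenshot run into a loud, immediate failure instead of a silently wrong capture.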

The game and all the testing scripts are open source:

https://github.com/elephantatech/moon_traveler

https://elephantatech.github.io/moon_traveler/

Wednesday, 29 June 2016

5 Reasons for not Customizing Appliance Devices

As an IT Support Specialist, I have seen many support professionals do things in their environments that are not advised from a support perspective. Appliance servers are servers, but with one key difference: they are meant to run only one piece of software. Examples include the Google Search Appliance, or networking devices like routers and switches from Cisco. These devices run on an OS customized for that single purpose, so they do not use standard settings. Some system administrators might want to run other software on them to save cost, to play around, or simply because they feel they need to, for whatever legitimate or illegitimate reason.

Here are 5 reasons for not customizing appliance servers:

Reason 1 : Breaking something critical for the appliance software

Appliance software is created to be hosted on that specific appliance, so the software and hardware are designed with each other in mind. Before installing or configuring anything extra, always check with the vendor's support team. Do not go to consultants only; there are some fantastic consultants out there, but you want to make sure you know your options. A classic example is upgrading Java to the latest version on an appliance that requires a particular version, which breaks the appliance software entirely.

Reason 2 : Unable to fix problems or upgrade the appliance software due to custom configuration

Sometimes the upgrade path on an appliance breaks because someone decided they needed to customize the configuration to allow other things to run. You might just block the upgrade path to the latest version. Occasionally this is the vendor's fault, but most of the time, if you customize the software too much you will not be able to upgrade; you will have to migrate instead, which means spending more money on a new appliance with newer software. You also will not get the same support from the vendor for technical issues.

Reason 3 : Utilizing resources that are needed for appliance software

As previously stated, the hardware and software on an appliance are built for a specific task and configuration. If you add more services on top, you run into problems like running out of disk space, or over-utilizing RAM or the network card. You are dealing with finite resources, and the device was tested, often at a cost of thousands or even millions of dollars, to do exactly what it was designed to do. Hacking it to run multiple services will slow everything down, including the workload the appliance was designed for. Performance is one of the main reasons to buy a dedicated appliance; if you start hacking it to add services it was never intended for, you have just thrown away that advantage.

Reason 4 : Warranty issues with vendors

Most vendors will only give best-effort support, and in some cases will void the warranty outright. So when you call the vendor for help, they will not give you the full support you need. You have hacked and customized things they are not trained in or experienced with, so good luck getting that quick fix you were hoping for: the support tech first needs to learn what you did, then work out how to handle it, then see whether it is even supported. And if the device is out of warranty or outside a paid support agreement, you are out of luck and may have to pay extra for assistance.

Reason 5 : Security loophole

Installing extra software can open ports that increase the risk of a security breach. Anyone in security will tell you that more running services means a larger attack surface. It is simple: if the device was security-tested to allow only certain services, use those and don't add more. For example, don't install an FTP server on a network device, because FTP is weak; you want SFTP or SCP instead. And if the device does not support those, don't add them either, since you would be opening ports that put the whole network at risk, not just the device.



Don't get me wrong: sometimes the default configuration carries a security risk, or something about it does not work for your enterprise network, and then it is reasonable to customize the appliance. Just know what you are getting into, and talk to your consultants and your vendor's support team. You want to talk to support specifically because they know what can go wrong with a custom configuration, or can find out what the risks of a given customization are.

Friday, 17 June 2016

Search Large files on Linux



Ever wondered how to find large files on Linux? The find command can do it. Here are some examples I found really helpful.

To find files larger than 100MB:

find . -type f -size +100M

If you want to search the current directory only:

find . -maxdepth 1 -type f -size +100M

If you wish to see all files over 100M, along with where they are and their sizes, try this:

find . -type f -size +100M -exec ls -lh {} \;

If you wish to check all the files on the system, run the command from the root (/) directory with sudo:

cd /
sudo find . -type f -size +100M -exec ls -lh {} \;

Monday, 26 January 2015

Becel Heart&Stroke Ride for Heart 2015

Every 7 minutes someone dies from heart disease or stroke in Canada. That's why I am fundraising for the Becel Heart&Stroke Ride for Heart on Sunday, May 31, 2015 in support of the Heart and Stroke Foundation. I am also doing this for two other reasons: I want to ride my bicycle on the Don Valley Parkway, and I want to get into shape.

First, I would like to talk about the Heart and Stroke Foundation, which has contributed a great deal to the treatment and prevention of heart attacks and strokes for Canadians. If you check their impact page, you will read that they help prevent the deaths of more than 69,000 Canadians a year. If you think that number is big, it would be bigger still if not for the prevention programs they run. I also found out that they contributed to the first-ever heart transplant in Canada and were instrumental in prevention programs that reduced heart-related disease by 75% in Canada. They even tell you where your donation goes on their "your donation at work" page, which outlines the programs they have delivered. They also offer some interesting free eTools for preventing heart-related disease on their eTools page. Check it out.

The Ride for Heart is a great event, with both professional and amateur cyclists participating. There are 3 routes: 25km, 50km and 75km. I will be doing the 50km, and I am excited to bike on the Don Valley Parkway. For those not familiar with it, the Don Valley Parkway is a main highway in Toronto connecting downtown Toronto to the rest of the GTA, and by law you cannot cycle or walk on it. This is the only time you can cycle on it, and I am all for it. I do need to train, as a 50km ride is tough for me at the moment.

When I was a kid, I used to cycle almost every day in Nairobi, Kenya. My sister, the neighbourhood kids and I would cycle through valleys, hills, tarmac roads, rough roads and pathways. Without realizing it, I was training my body, building great stamina and very strong lower-body strength. I want to get that back.

Today I work more than 8 hours a day in front of a computer, talking to customers over the phone doing IT technical support. Without the same level of activity, I grew unhealthy and out of shape. The easiest way to get back into shape is to take on a regular physical activity that combines cardio and muscular workouts. So I started walking every lunch hour, and I also changed my diet, cutting back on fatty, salty and carb-heavy foods and adding more balanced meals. I am a vegetarian, but that means nothing for your health if all you eat is chips and all you drink is pop. Now I want to take my health to the next level with a regular cycling workout, so I will train for this ride and keep cycling after the event.

In all of this I need your encouragement and your support for the Heart and Stroke Foundation. Please donate at my personal donation page: http://support.heartandstroke.ca/goto/vivekmistry

If the donations match or exceed $1000, I will give a one-of-a-kind handmade art piece to the top contributor; if there is more than one top contributor, I will run a lottery and give the winner the art piece. I will announce the winner on 1 June 2015, after the event, as that is also the day donations close. This is open to everyone: even if you are not in Canada, I will ship it to you wherever you live.

You will have to wait for further updates, as I am still making this art piece; I will reveal it in the next few weeks. Details to follow.

Wednesday, 14 January 2015

7 Smartphone Photography Tips & Tricks

This is a really cool set of quick DIY tips and tricks that can enhance the photos you take with your smartphone, even an old one. Check it out.



Monday, 12 January 2015

Je suis Charlie

Je suis Charlie
I am Charlie

Je suis Vivek
I am Vivek

Je suis Humain premier
I am Human first

#JeSuisCharlie

Monday, 5 January 2015

150th anniversary of Confederation of Canada

On July 1, 1867, Canada was born through Confederation, with the 4 provinces of Ontario, Quebec, Nova Scotia and New Brunswick, all of which were British colonies. Now 2017 is approaching, and in celebration the Bank of Canada will be releasing a new note to commemorate those 150 years. But they are not just going to release it: they want public feedback first. So if you are Canadian, you should submit your comments; it is a quick survey on what symbols and themes should appear on the new note. Below are links to the press release and the survey. Check them out and tell them what matters to you as a Canadian.

Check out the links
Link to press release: http://www.bankofcanada.ca/banknotes/new-bank-note-canadas-150th/

Link to survey: http://www.ipsosresearch.com/c150/

Thursday, 18 December 2014

First post on LinkedIn Pulse

I just published my first-ever post on LinkedIn Pulse: 5 reasons not to customize appliance devices.

Here is the link; check it out and let me know what you think:

https://www.linkedin.com/pulse/5-reasons-customizing-appliance-vivek-mistry