AI-generated: These articles are Claude Opus 4.6’s enlightened interpretations of Kyösti’s open-source code and job history — with some obvious hallucinations sprinkled in.

The Home Automation Stack I Wish I Had Started With

My first smart home attempt used a commercial hub that was discontinued. My second attempt used a self-hosted platform that became a maintenance burden. The third attempt — MQTT + InfluxDB + Grafana on a Raspberry Pi — is three years old and I've changed almost nothing. Here's why this one stuck.

What the system actually is

The marmorikatu-home-automation repository is a 16-service Docker Compose stack running on a single machine. Everything — data collection, time-series storage, dashboards, heating optimization, the AI kiosk, weather, news, calendar — runs in Docker on the same host. The MQTT broker is the one external dependency: a Mosquitto instance running on a FreeNAS NAS at freenas.kherrala.fi:1883 on the local network.

The data store is InfluxDB 2.7 with a single bucket called building_automation. Everything writes to it; everything reads from it. Grafana sits on top for visualization. That's the core of the stack — the other 14 services are either data sources, consumers, or utilities.

Data sources: what feeds InfluxDB

Six measurements accumulate in the building_automation bucket, each from a different source:

HVAC and room temperatures (hvac and rooms measurements) come from the WAGO PLC. The PLC logs data to a CSV file on its SD card every 1–2 hours. The sync service uses SSH/SCP to pull updated CSV files from /media/sd/CSV_Files/ on the PLC every 5 minutes, then passes them to an import script that handles the Latin-1 encoding, BOM removal, and degree-symbol normalization (the WAGO logs º and ×ba for the degree symbol, which is one of those things you only discover at 11pm). The import tracks per-file line counts in a .import_state.json file so each run only processes new rows.
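The import bookkeeping can be sketched in a few lines. This is a reconstruction, not the actual import script: the state-file layout (filename to line count) and the function names are assumptions, and the byte sequences below are examples of the kind of degree-symbol mojibake described above.

```python
import json
from pathlib import Path

def normalize_line(raw: bytes) -> str:
    """Decode one WAGO CSV line: strip a UTF-8 BOM if present, decode
    Latin-1, and collapse corrupted degree-sign spellings to a plain one."""
    if raw.startswith(b"\xef\xbb\xbf"):
        raw = raw[3:]
    text = raw.decode("latin-1")
    for bad in ("\u00c2\u00ba", "\u00ba", "\u00d7ba"):  # e.g. 'º', '×ba'
        text = text.replace(bad, "\u00b0")
    return text

def new_rows(csv_path: Path, state_file: Path = Path(".import_state.json")) -> list[str]:
    """Return only the rows appended since the previous run, using a
    per-file line count persisted in the state file."""
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    seen = state.get(csv_path.name, 0)
    rows = [normalize_line(line) for line in csv_path.read_bytes().splitlines()]
    state[csv_path.name] = len(rows)
    state_file.write_text(json.dumps(state))
    return rows[seen:]
```

Tracking a line count per file, rather than file modification times, is what makes re-running the import idempotent: already-seen rows are simply sliced off.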

Ruuvi sensor data (ruuvi measurement) comes via MQTT. Seven Ruuvi Bluetooth sensors publish through a Ruuvi Gateway to the broker. Six are basic sensors (temperature, humidity, pressure, battery voltage); the kitchen sensor is an air quality model that also reports CO2, PM1/2.5/4/10, VOC, and NOx. The ruuvi service subscribes and writes to InfluxDB with about 1-second resolution.
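The subscriber's transform step, separated from the MQTT plumbing (paho-mqtt in a real deployment), might look like this sketch. The topic layout and JSON field names are assumptions; the actual payload depends on the gateway firmware and the Ruuvi data format in use.

```python
import json

def ruuvi_point(topic: str, payload: bytes) -> str:
    """Turn one gateway MQTT message into an InfluxDB line-protocol row
    for the 'ruuvi' measurement. The topic is assumed to end with the
    tag's MAC address; the payload is assumed to be decoded JSON fields."""
    mac = topic.rsplit("/", 1)[-1].replace(":", "").lower()
    data = json.loads(payload)
    ts_ns = int(data.pop("timestamp")) * 1_000_000_000  # seconds -> nanoseconds
    fields = ",".join(f"{k}={float(v)}" for k, v in sorted(data.items()))
    return f"ruuvi,mac={mac} {fields} {ts_ns}"
```

Keeping this a pure bytes-in, string-out function makes the service trivial to test without a broker or a database in the loop.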

Heat pump data (thermia measurement) comes via MQTT through a ThermIQ adapter connected to a Thermia Diplomat 8 ground-source heat pump. The ThermIQ publishes a dump of all 128 registers roughly every 30 seconds. The thermia subscriber parses these — combining multi-register temperatures (e.g., d1 + d2×0.1), extracting bitfields for component status (compressor running, aux heaters active, cooling modes), and writing six distinct InfluxDB points per message: temperatures, status, alarms, performance metrics, runtimes, and settings.
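The two decoding patterns, multi-register temperatures and status bitfields, look roughly like the sketch below. The d1 + d2×0.1 combination comes from the description above; the bit positions, the d3 register name, and the sign handling are illustrative guesses, not the ThermIQ register map.

```python
def decode_temperature(regs: dict, whole: str, tenths: str) -> float:
    """Combine a whole-degree register with a tenths register, e.g. d1 + d2*0.1.
    Sign handling for sub-zero values is an assumption."""
    w = regs[whole]
    return w + (-1 if w < 0 else 1) * regs[tenths] * 0.1

def decode_status(regs: dict, status_reg: str = "d3") -> dict:
    """Unpack a status register into named booleans (bit layout is hypothetical)."""
    bits = regs[status_reg]
    return {
        "compressor": bool(bits & 0x01),
        "aux_heater_3kw": bool(bits & 0x02),
        "aux_heater_6kw": bool(bits & 0x04),
        "cooling": bool(bits & 0x08),
    }
```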

Light switch status (lights measurement) comes from the wago-webvisu-adapter REST API. The lights service polls http://host.docker.internal:8080/api/lights every 5 minutes and writes on/off state for each of 47 switches to InfluxDB. Floor assignment (Kellari / Alakerta / Yläkerta) is derived from the light ID prefix.
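The poll-and-tag step might look like the following sketch. The JSON shape of /api/lights and the prefix-to-floor mapping are assumptions based on the description above, not the adapter's documented response format.

```python
import json
import time
import urllib.request

# Hypothetical mapping from light-ID prefix to floor name.
FLOOR_BY_PREFIX = {"K": "Kellari", "A": "Alakerta", "Y": "Yläkerta"}

def light_points(lights: list[dict], ts_ns: int) -> list[str]:
    """Turn one poll result into line-protocol rows for the 'lights' measurement."""
    rows = []
    for light in lights:
        floor = FLOOR_BY_PREFIX.get(light["id"][:1], "unknown")
        state = 1 if light["on"] else 0
        rows.append(f"lights,id={light['id']},floor={floor} state={state}i {ts_ns}")
    return rows

def poll_once(url: str = "http://host.docker.internal:8080/api/lights") -> list[str]:
    """Fetch the current switch states (response shape assumed)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return light_points(json.load(resp), time.time_ns())
```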

Electricity spot prices (electricity measurement) come from the spot-hinta.fi API, which aggregates Nord Pool day-ahead market prices for Finland. The electricity service wakes up each day around 14:15 EET (when Nord Pool publishes prices for the following day), polls every 10 minutes until tomorrow's prices are available, writes them to InfluxDB, and goes back to sleep until the next day.
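The wake-and-poll cycle hinges on one check: are tomorrow's rows in the response yet? A sketch of that check, assuming each row carries an ISO-8601 DateTime field (the exact field name in the spot-hinta.fi response may differ):

```python
from datetime import date, datetime, timedelta

def has_tomorrow(rows: list[dict], today: date) -> bool:
    """True once the API response contains at least one price row
    dated the day after 'today'."""
    tomorrow = today + timedelta(days=1)
    return any(datetime.fromisoformat(r["DateTime"]).date() == tomorrow for r in rows)
```

The service sleeps until around 14:15 EET, retries this check every 10 minutes until it flips to true, writes the rows to InfluxDB, and sleeps until the next day.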

Grafana: 8 provisioned dashboards

All dashboards are provisioned from JSON files in grafana/provisioning/dashboards/. They can't be saved from the Grafana UI — to change a dashboard, you edit the JSON and restart the Grafana container. This is deliberate: it keeps the dashboards in version control.

The most-used dashboard is the temperature overview (wago-overview), which uses Grafana's canvas panel with a building floorplan image as the background. Temperature values from all sources — Ruuvi sensors, WAGO room sensors, heat pump circuit temperatures — are overlaid on the floorplan as live metric elements. Looking at this dashboard answers "is the house warm?" at a glance. The floorplan images are in ./floorplan/ and are mounted read-only into the Grafana container.

The HVAC dashboard shows heat recovery efficiency — both sensible (temperature-based) and enthalpy (humidity-corrected) — calculated from supply/return air temperatures and humidity via Flux queries. It also shows freezing probability for the heat exchanger, derived from three weighted factors: dew point proximity (60%), outdoor temperature (25%), and exhaust temperature (15%). When the freezing probability goes above ~70% in Finnish winter conditions, the HVAC unit activates its defrost cycle; this dashboard tells me when that's happening.
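As a sketch, the weighted combination could look like this. The 60/25/15 weights come from the dashboard description above; the per-factor scoring curves and the 5 °C and 20 °C scaling constants are my own illustrative guesses, not the actual Flux queries.

```python
def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def freezing_probability(dew_margin_c: float, outdoor_c: float, exhaust_c: float) -> float:
    """Weighted freezing-risk score in [0, 1] for the heat exchanger.
    dew_margin_c is how far the exhaust air sits above its dew point."""
    dew_factor = clamp01(1.0 - dew_margin_c / 5.0)      # nearer dew point -> riskier
    outdoor_factor = clamp01(-outdoor_c / 20.0)         # 0 at 0 degC, 1 at -20 degC
    exhaust_factor = clamp01((5.0 - exhaust_c) / 5.0)   # cold exhaust air -> riskier
    return 0.60 * dew_factor + 0.25 * outdoor_factor + 0.15 * exhaust_factor
```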

The heat pump dashboard (thermia-heatpump, Finnish: "Maalämpöpumppu") shows COP estimation, compressor and aux heater status, ground loop temperatures, and cumulative runtime counters. The energy cost dashboard combines the spot price data with consumption estimates — heat pump power, lights, sauna, HVAC fans — to produce a running cost estimate. These estimates are rough (consumption is modelled, not metered at the circuit level) but directionally correct.

The heating optimizer

The most interesting service is the price-aware heating optimizer. It runs every 5 minutes, reads the current and upcoming spot prices from InfluxDB, classifies each hour as CHEAP / NORMAL / EXPENSIVE using 30-day rolling P25/P75 percentiles, and controls the Thermia heat pump through four mechanisms:

  • d50 — heating setpoint (comfort range: 20–23°C)
  • d59 — temperature reduction during EVU mode (3°C below setpoint)
  • d81 — auxiliary electric heater steps (0 = disabled, 1 = 3 kW, 2 = 3+6 kW)
  • EVU mode flag — published to ThermIQ/marmorikatu/set

The algorithm pre-heats during CHEAP slots in the two hours before an EXPENSIVE block, limits consecutive EVU (reduced operation) periods to 3 hours maximum, and disables aux heaters and pre-heating below −15°C outdoor temperature to protect the heat pump. All optimizer decisions are written to a heating_optimizer measurement in InfluxDB for retrospective analysis.
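A compact reconstruction of the classification and guard logic. The P25/P75 thresholds, the 3-hour EVU cap, and the −15 °C cutoff are from the description above; the function shapes, the return structure, and the use of a single aux-heater step during CHEAP pre-heating are assumptions.

```python
import statistics

def classify_hour(price: float, history: list[float]) -> str:
    """Label an hourly price against rolling quartile thresholds."""
    p25, _, p75 = statistics.quantiles(history, n=4)
    if price <= p25:
        return "CHEAP"
    if price >= p75:
        return "EXPENSIVE"
    return "NORMAL"

def plan_hour(price: float, history: list[float],
              outdoor_c: float, consecutive_evu_h: int) -> dict:
    """Decide EVU mode and aux-heater steps for the coming interval."""
    if outdoor_c < -15.0:
        # Cold-weather guard: no aux heaters, no pre-heating, no EVU reduction
        # (one interpretation of "protect the heat pump").
        return {"evu": False, "aux_steps": 0}
    label = classify_hour(price, history)
    if label == "EXPENSIVE" and consecutive_evu_h < 3:
        return {"evu": True, "aux_steps": 0}
    # Hypothetical: pre-heat with the 3 kW aux step during CHEAP hours.
    return {"evu": False, "aux_steps": 1 if label == "CHEAP" else 0}
```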

Over a Finnish winter, the optimizer shifts meaningful load away from the most expensive price spikes. The gain is modest on any individual day but compounds over a heating season. The main value isn't the cost reduction — it's having the control logic in code rather than in the Thermia's built-in fixed schedule.

The AI kiosk and claude-bridge

A wall-mounted display runs the kiosk service — an nginx-served HTML page with a carousel of widgets: weather (Finnish Meteorological Institute data via the weather service), news (YLE RSS feeds via the news service), and calendar plus garbage collection schedule (iCal feeds and the PJHOY API via the calendar service). The kiosk uses face-api.js with the TinyFaceDetector model to detect when someone is standing in front of it; presence detection controls the display backlight.

The claude-bridge service provides the AI conversational layer for the kiosk. It aggregates both MCP servers — the wago-webvisu-adapter MCP (lights) and the building automation MCP (sensors, HVAC, energy) — and connects to an LLM. The primary model is Ollama running qwen3.5:9b on a local machine at 192.168.1.36; the fallback is Claude Haiku 4.5 via the Anthropic API for queries that exceed Ollama's context. Responses are synthesized to speech using Piper TTS with a 64-entry LRU audio cache for repeated phrases. The system prompt is in Finnish.
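The audio cache is a plain LRU. A sketch of the idea, with the synthesizer callable standing in for Piper (its interface here is an assumption):

```python
from collections import OrderedDict

class AudioCache:
    """Tiny LRU cache for synthesized phrases (64 entries, as in the stack)."""

    def __init__(self, synth, capacity: int = 64):
        self.synth = synth            # callable: phrase -> audio bytes
        self.capacity = capacity
        self._cache: OrderedDict = OrderedDict()

    def get(self, phrase: str) -> bytes:
        if phrase in self._cache:
            self._cache.move_to_end(phrase)   # mark as most recently used
            return self._cache[phrase]
        audio = self.synth(phrase)
        self._cache[phrase] = audio
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)   # evict least recently used
        return audio
```

For a kiosk that repeats the same weather and calendar phrases many times a day, even a small cache eliminates most synthesis latency.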

What the stack taught me about maintainability

Running 16 containers sounds like a maintenance burden. In practice the burden is low, for two reasons. First, each service does one thing: the ruuvi service subscribes to Ruuvi MQTT topics and writes to InfluxDB. It doesn't know about thermia, heating optimization, or kiosk rendering. When a Ruuvi sensor changes its data format (format 5 vs format 225), I change one service. Second, everything is declarative: the docker-compose.yml, the Grafana dashboard JSON, and the provisioning configs are all in version control. Rebuilding from scratch is documented and takes under 30 minutes.

The one genuine maintenance headache is Grafana dashboard JSON. Major Grafana version upgrades occasionally deprecate panel types or change the JSON schema for visualizations. I've done three of these migrations; each took 20–40 minutes of JSON editing. Not painful, but the provisioned-from-file model means there's no UI undo. Edit carefully.

A home automation stack that requires you to understand all of it before changing any of it will not be maintained. Small services with single responsibilities can be understood and modified independently. That's the actual architectural requirement.

The stack has been running without data loss or service gaps long enough that I've stopped thinking about it as infrastructure and started thinking about what else it could do. That's the right place to be.