POMSKI is a Python-based live-coding environment for real-time MIDI composition.
Write and rewrite patterns while the music plays — no stopping, no rendering, no waiting.
Windows users: download the POMSKI setup installer from Itch.IO.
Mac users: build from source by cloning the repository.
Most music software asks you to build first, then listen. You place notes on a grid, render an audio file, press play, realise something is wrong, go back to the grid. The feedback loop is slow, and the act of composition is disconnected from the act of listening.
POMSKI inverts this. You write a few lines of Python, press Shift+Enter, and the change
takes effect on the very next bar — while everything else keeps playing. The music is always
running. You are always inside it.
Live coding is a practice borrowed from computer science performance art (the Algorave scene, SuperCollider, TidalCycles). Its central insight is that code is notation — a precise, expressive language for describing musical pattern. Unlike a piano roll, code can describe not just a specific sequence of notes but the rules that generate a sequence: mathematical systems, probability, chaos, feedback loops.
In a DAW you compose a performance. In POMSKI you perform a composition. The score and the concert are the same event.
The browser dashboard at http://localhost:8080 lets you see patterns, mute channels, monitor signals, and run REPL commands without touching the terminal.

You need to understand about eight Python concepts. All of them are explained in the Python Primer section. If you can write a grocery list, you can write a POMSKI pattern.
If using LoopBe as your virtual MIDI port, its feedback protection will silently mute output if it detects a loop. Check the LoopBe tray icon if MIDI goes silent unexpectedly.
```shell
# From the project directory:
python examples/pomski_template.py
```
You will see startup messages. Then open your browser to http://localhost:8080.
Shift+Enter sends the current block. Ctrl+Shift+Enter sends the whole editor. Ctrl+↑/↓ scrolls command history.

Copy this into the editor and press Shift+Enter. You should hear a metronome-like kick on MIDI channel 1.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.note(60, beat=0)
```
You need exactly eight Python concepts to use POMSKI. Nothing else is required.
Store a value and reuse it by name.
```python
root = 60        # middle C
speed = 0.25     # quarter note
scale = "dorian"
```
A named block of code that runs when called.
```python
def greet():
    print("hello")

greet()  # runs it
```
Wraps a function with extra behaviour. This is how POMSKI registers patterns.
```python
@composition.pattern(channel=0)
def ch1(p):
    ...  # your pattern here
```
Repeat an action a set number of times.
```python
for i in range(4):  # runs 4 times: i = 0, 1, 2, 3
    p.note(60, beat=i)
```
An ordered collection of values. Access items by index (starting at 0).
```python
notes = [60, 63, 67, 70]
notes[0]  # → 60 (first)
notes[2]  # → 67 (third)
```
Returns the remainder after division. Essential for cycling through lists.
```python
7 % 4  # → 3
8 % 4  # → 0 (wraps back)

# Cycle through a list:
notes[p.cycle % len(notes)]
```
Run code only when a condition is true.
```python
if p.cycle % 4 == 0:  # runs every 4 loops
    p.note(72, beat=0)
```
Loop through a list and get both the index and the value.
```python
pitches = [60, 64, 67]
for i, pitch in enumerate(pitches):
    p.note(pitch, beat=i)
```
Inside every pattern function, p is your pattern builder. It is passed in automatically —
you never create it yourself. Every method you call on it (p.note(), p.euclidean(), etc.)
adds events to the current loop cycle of that pattern.
p has a few useful read-only attributes:
| Attribute | Type | Description |
|---|---|---|
| p.cycle | int | How many times this pattern has looped since it was defined. Starts at 0. Use it for evolving patterns over time. |
| p.bar | int | The global bar number at the time this pattern fired. |
| p.rng | Random | A seeded random number generator. Use p.rng.random(), p.rng.choice(list), p.rng.randint(a,b). Reproducible — same seed every time the pattern restarts. |
| p.section | SectionInfo | Current form section (name, bar within section, total bars). None if no form is defined. |
| p.data | dict | Direct reference to composition.data. Use it to read Live values, signals, or cross-pattern state. |
p.cycle is a number. Do not write p.cycle([60, 63]) —
that will throw a TypeError. To cycle through a list, write my_list[p.cycle % len(my_list)].
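The cycling idiom can be checked in plain Python, outside POMSKI — the loop below just simulates successive values of `p.cycle`:

```python
notes = [60, 63, 67, 70]

# Simulate six successive loop cycles of a pattern
played = [notes[cycle % len(notes)] for cycle in range(6)]

# After the list is exhausted, the modulo wraps back to the start
print(played)  # → [60, 63, 67, 70, 60, 63]
```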
POMSKI uses raw MIDI pitch numbers (0–127). Middle C is 60. Each semitone is +1 or −1. An octave is 12 semitones. Common reference points:
| Note | MIDI# | Note | MIDI# | Note | MIDI# |
|---|---|---|---|---|---|
| C3 | 48 | C4 (middle C) | 60 | C5 | 72 |
| D3 | 50 | D4 | 62 | G4 | 67 |
| E3 | 52 | E4 | 64 | A4 | 69 |
| F3 | 53 | F4 | 65 | Bb4 | 70 |
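Since each semitone is +1 and each octave is +12, note numbers can be derived rather than memorised. A small helper — plain Python for illustration, not part of POMSKI:

```python
# Semitone offsets of note names within one octave
NOTE_OFFSETS = {"C": 0, "C#": 1, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def midi(name, octave):
    # C4 = 60, so octave n starts at (n + 1) * 12
    return (octave + 1) * 12 + NOTE_OFFSETS[name]

midi("C", 4)   # → 60 (middle C)
midi("A", 4)   # → 69
midi("Bb", 4)  # → 70
```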
Everything lives inside a composition. It holds the clock, the key, the harmony, and all
the running patterns. You start it at the bottom of your script with composition.play(),
which blocks forever and runs the event loop.
```python
composition = subsequence.Composition(key="C", bpm=120)
composition.harmony(style="functional_major", cycle_beats=4)
composition.web_ui()  # start the browser dashboard
composition.live()    # start the REPL server on port 5555
composition.play()    # always last — starts the clock
```
The template defines 16 slots — ch1 through ch16 — one per MIDI channel.
Each slot is a silent placeholder. To make a slot play, you redefine it by name
from the browser editor. The scheduler detects the same function name and swaps the new pattern
in at the next bar boundary without interrupting playback.
POMSKI channels are 0-indexed. MIDI channel 1 = channel=0,
channel 10 (drums) = channel=9. This is a common source of confusion — the template
assigns slots in order, so ch1 is channel=0, ch10 is
channel=9.
If you use a new function name (one not already in the 16 slots), POMSKI automatically steals the first empty slot and puts your pattern there. You can use any name you like.
length is in beats. length=4 is one 4/4 bar.
length=8 is two bars. length=0.5 is a half-bar (2 beats).
Patterns of different lengths run simultaneously and loop independently — a 3-beat and a 4-beat
pattern will phase against each other like a polyrhythm.
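The phasing can be reasoned about numerically: a 3-beat and a 4-beat loop realign every lcm(3, 4) = 12 beats. A quick check in plain Python (illustration only):

```python
from math import lcm

# Downbeat times (in beats) of a 3-beat loop and a 4-beat loop
three = [i * 3 for i in range(8)]  # 0, 3, 6, 9, 12, ...
four = [i * 4 for i in range(6)]   # 0, 4, 8, 12, ...

# The two downbeats coincide again after the least common multiple
print(lcm(3, 4))  # → 12 beats (3 bars of 4/4)
```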
When you send a redefined pattern from the browser, the live server executes the decorator. If a pattern with that name is already running, POMSKI replaces its builder function on the existing slot at the next bar boundary. The channel, MIDI routing, and sync all remain intact. Nothing restarts. Nothing skips.
The simplest possible pattern. One note on beat 0, looping every 4 beats.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.note(60, beat=0)
```
pat is pre-loaded as an alias for composition.pattern.
The channel is the first positional argument and length defaults to 4,
so @pat(0) is equivalent to @composition.pattern(channel=0, length=4).
```python
@pat(0)  # channel 0, length 4
def ch1(p):
    p.note(60, beat=0)

@pat(0, 8)  # channel 0, length 8 beats
def ch1(p):
    p.note(60, beat=0)
```
Beat positions are in beats from the start of the pattern. Beat 0 = the downbeat. Beat 0.5 = the "and" of beat 1. Beat 1 = beat 2 of the bar.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.note(60, beat=0, velocity=100, duration=0.4)
    p.note(64, beat=1, velocity=80, duration=0.4)
    p.note(67, beat=2, velocity=80, duration=0.4)
    p.note(60, beat=3, velocity=60, duration=0.4)
```
Drums go on channel=9 (MIDI channel 10). Use drum_note_map=gm_drums.GM_DRUM_MAP
to use drum names instead of numbers.
```python
@composition.pattern(channel=9, length=4, drum_note_map=gm_drums.GM_DRUM_MAP)
def ch10(p):
    # hit_steps places hits at 16th-note grid positions (0–15)
    p.hit_steps("kick_1", [0, 3, 8, 12], velocity=110)
    p.hit_steps("snare_1", [4, 12], velocity=100)
    p.hit_steps("hi_hat_closed", range(16), velocity=70)
    p.velocity_shape(low=55, high=90)  # humanise velocity
```
Write a melody in Sonic Pi style — space-separated pitch numbers with _ for rests.
```python
@composition.pattern(channel=1, length=4)
def ch2(p):
    p.seq("60 _ 63 _ 65 67 _ 63", velocity=80)
    # plays: C4, rest, Eb4, rest, F4, G4, rest, Eb4
    # each token is one eighth note (length/8 = 0.5 beats)
```
To silence a slot without stopping playback, click the × button on that slot in the Patterns tab, or send a blank definition:
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    pass  # empty body = silence
```
| Method | Description |
|---|---|
| p.note(pitch, beat, velocity, duration) | Place a single note. pitch: MIDI 0–127. beat: position in beats from start of pattern. velocity: 0–127 (default 100). duration: length in beats (default 0.5). |
| p.hit_steps(pitch, steps, velocity) | Place hits at 16th-note grid positions. steps is a list of indices 0–15. With drum_note_map, pitch can be a drum name string like "kick_1". |
| p.sequence(steps, pitches, velocities, durations) | Pair a list of grid positions with a list of pitches. Positions are 16th-note indices; pitch list cycles if shorter than steps list. Parameters are velocities and durations (plural) — pass a single int/float or a list. |
| p.seq(notation, velocity) | Sonic Pi style: space-separated numbers, _ for rests. All tokens are equal duration (pattern length / token count). Example: "60 _ 62 64". Note: p.seq() has no duration parameter — note length is set automatically. |
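The pitch-list cycling described for p.sequence() can be illustrated in plain Python — this shows the pairing rule itself, not the library's code:

```python
steps = [0, 2, 4, 6, 8, 10, 12, 14]  # 16th-grid positions (8th notes)
pitches = [60, 63, 67]               # cycles when shorter than the steps list

# How the documented cycling pairs them up:
pairs = [(s, pitches[i % len(pitches)]) for i, s in enumerate(steps)]
print(pairs)
# → [(0, 60), (2, 63), (4, 67), (6, 60), (8, 63), (10, 67), (12, 60), (14, 63)]
```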
| Method | Description |
|---|---|
| p.euclidean(pitch, pulses, velocity, duration) | Distribute pulses hits as evenly as possible across the pattern's grid (Björklund algorithm). The grid size is derived automatically from the pattern's length. Classic for polyrhythmic layering. |
| p.bresenham(pitch, pulses, velocity, duration) | Alternate even distribution using the Bresenham line algorithm — creates slightly different accent patterns than Euclidean. Grid size is derived automatically from pattern length. |
| p.hit_steps() with range() | For quick straight 8ths or 16ths: range(0, 16, 2) = every other step (8th notes). range(16) = all 16 steps. |
| Method | Description |
|---|---|
| p.randomize(timing, velocity) | Add human-feel micro-timing and velocity variation. timing=0.02 means ±2% of a beat. velocity=0.05 means ±5% velocity. |
| p.dropout(probability) | Randomly remove notes. p.dropout(0.2) = 20% chance each note is silenced on this cycle. Adds natural variation without changing the underlying pattern. |
| p.quantize(key, mode) | Snap all notes to the nearest pitch in a scale. Key is a note name ("C", "F#", "Bb"). Mode is a scale name ("ionian", "dorian", "minor", "harmonic_minor", etc.). |
| p.quantize_m21(key, scale_name) | Like p.quantize() but unlocks Music21's full scale library. Use for ragas, octatonic, whole-tone, and other exotic scales. Requires pip install music21. |
| p.transpose(semitones) | Shift all notes up or down by a fixed number of semitones. Combine with p.cycle for automatic transposition. |
| p.shift(steps) | Move all notes forward or backward in time by steps 16th-note positions. Useful for polyrhythmic offset against other patterns. |
| p.reverse() | Flip the pattern backwards in time. |
| p.velocity_shape(low, high) | Re-scale all velocities to fit within a low–high range, preserving relative dynamics. Great for humanising step-sequenced drum patterns. |
| p.thin(pitch, strategy, amount) | Remove notes for a specific pitch or drum name based on rhythmic position. amount=0.5 removes roughly half; amount=1.0 removes all qualifying notes. Strategies: "strength" (default — weakest positions drop first), "sixteenths", "offbeat", "e_and_a", "downbeat", "upbeat", "uniform". Unlike p.dropout(), targets one instrument and respects rhythmic position. |
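One plausible reading of the p.shift() and p.reverse() semantics, sketched on bare step indices (an illustration with assumed wrap-around behaviour, not POMSKI's implementation):

```python
GRID = 16  # 16th-note steps in a 4-beat pattern

hits = [0, 3, 8, 12]  # step positions of some notes

# p.shift(steps): move every note forward, wrapping at the grid end (assumption)
shifted = sorted((s + 3) % GRID for s in hits)
print(shifted)  # → [3, 6, 11, 15]

# p.reverse(): mirror the pattern in time (assumption: step s maps to GRID-1-s)
reversed_hits = sorted((GRID - 1) - s for s in hits)
print(reversed_hits)  # → [3, 7, 12, 15]
```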
| Method | Description |
|---|---|
| p.from_midi(filepath, track, channel, pitch_offset, velocity, duration) | Read notes from a .mid file and place them in the pattern. Timing is normalised to beat 0 and wrapped to pattern length. track is the zero-based track index. pitch_offset transposes all loaded notes. |
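The timing normalisation described above — shift to beat 0, wrap to the pattern length — can be sketched in plain Python (an illustration of the rule, not POMSKI's code):

```python
def normalise(beats, length):
    # Shift so the earliest note lands on beat 0, then wrap into [0, length)
    start = min(beats)
    return sorted((b - start) % length for b in beats)

# Notes read from bar 3 of a MIDI file (beats 12, 13.5, 17) into a 4-beat pattern
print(normalise([12, 13.5, 17], length=4))  # → [0, 1, 1.5]
```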
Here is a complete step-by-step process for building a live track in POMSKI, starting from silence. Each step is sent as a separate block from the editor — the previous patterns keep playing while you add new ones.
Start with rhythm. Keep it simple — kick and hi-hat first. Add the snare in a moment.
```python
@composition.pattern(channel=9, length=4, drum_note_map=gm_drums.GM_DRUM_MAP)
def ch10(p):
    p.hit_steps("kick_1", [0, 8], velocity=110)
    p.hit_steps("hi_hat_closed", range(16), velocity=65)
    p.velocity_shape(low=50, high=80)
```
A repeating root note with a rhythmic pattern. Low octave, short duration for punch.
```python
@composition.pattern(channel=1, length=4)
def ch2(p):
    p.hit_steps(36, [0, 3, 8, 11, 14], velocity=100)  # C2
```
A simple ascending figure using p.seq(). The _ rests give it breathing room.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.seq("60 _ 63 65 _ 67 _ 65", velocity=80)
    p.randomize(timing=0.01, velocity=0.04)
```
Redefine ch10 with more elements. The existing kick and hi-hat keep playing until the bar ends, then this replaces them.
```python
@composition.pattern(channel=9, length=4, drum_note_map=gm_drums.GM_DRUM_MAP)
def ch10(p):
    p.hit_steps("kick_1", [0, 3, 8, 12], velocity=110)
    p.hit_steps("snare_1", [4, 12], velocity=100)
    p.hit_steps("hi_hat_closed", range(16), velocity=70)
    p.velocity_shape(low=55, high=90)
```
Make the melody change every 4 loops — play one phrase for 4 bars, then another.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    phrases = [
        "60 _ 63 65 _ 67 _ 65",
        "67 _ 65 63 _ 60 _ 63",
        "65 67 _ 70 _ 67 65 _",
    ]
    phrase = phrases[(p.cycle // 4) % len(phrases)]
    p.seq(phrase, velocity=80)
    p.randomize(timing=0.01)
```
Use the Patterns tab mute buttons, or type these into the Quick Command box at the bottom of the log panel:
```python
composition.mute("ch10")    # silence drums
composition.unmute("ch10")  # bring them back
```
p.cycle increments every time the pattern loops. Use integer division (//) to
create larger structural boundaries — every N loops, something changes.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    # Changes key every 8 loops (every 8 bars)
    keys = ["C", "F", "G", "Am"]
    key = keys[(p.cycle // 8) % len(keys)]

    # Every other loop, add an extra note
    if p.cycle % 2 == 0:
        p.note(72, beat=3.5)

    p.seq("60 _ 63 65", velocity=80)
    p.quantize(key, "dorian")
```
p.rng is a seeded random number generator local to your pattern.
Use it instead of Python's random module — it ensures reproducible
behaviour if you restart a pattern with the same name.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    scale = [60, 62, 63, 65, 67, 70, 72]
    for i in range(8):
        if p.rng.random() > 0.3:  # 70% chance of playing
            pitch = p.rng.choice(scale)
            p.note(pitch, beat=i * 0.5)
```
Apply p.dropout() after placing notes for probabilistic thinning — the underlying
pattern stays intact but notes randomly vanish each cycle, keeping things interesting.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.seq("60 62 63 65 67 65 63 62", velocity=80)
    p.dropout(0.25)  # 25% of notes vanish each cycle
```
POMSKI has a signal system for continuously varying values — LFOs, envelopes, Perlin noise curves. Signals appear as scrolling graphs in the Signals tab of the UI and can drive any numeric parameter inside a pattern.
Create a named LFO with composition.conductor.lfo(), then read its current value
inside any pattern with p.signal(name). The value (0.0–1.0 by default) can drive
velocity, pitch offset, or any other numeric parameter.
```python
# Create an LFO (send this once from the editor or add it to the template)
# cycle_beats = how many beats for one full sweep (16 = 4 bars at 4/4)
composition.conductor.lfo("vol_lfo", cycle_beats=16, min_val=0.2, max_val=1.0)
```
```python
# Read it inside a pattern with p.signal()
@composition.pattern(channel=0, length=4)
def ch1(p):
    vol = p.signal("vol_lfo")  # returns 0.2–1.0, sweeping over 16 beats
    for i in range(8):
        p.note(60, beat=i * 0.5, velocity=int(vol * 100))
```
perlin_1d generates smooth, organic noise — much more musical than pure random.
Import it and write values to composition.data to visualise them.
```python
from subsequence.sequence_utils import perlin_1d

@composition.pattern(channel=0, length=4)
def ch1(p):
    # perlin_1d(x, seed) → float 0.0–1.0
    # Multiplying p.cycle by a small number controls how fast it moves
    noise = perlin_1d(p.cycle * 0.07, seed=42)

    # Write to composition.data — shows up in Signals tab
    composition.data["pitch_wander"] = noise

    # Map 0–1 noise to a pitch range
    base_pitch = 48 + int(noise * 24)  # C3 to C5
    for i in range(8):
        p.note(base_pitch + i, beat=i * 0.5)
    p.quantize("C", "dorian")
```
p.data is the same dictionary as composition.data. One pattern can
write a value, another can read it — enabling the kind of interdependence that makes a
generative system feel alive.
```python
from subsequence.sequence_utils import perlin_1d

# Pattern 1 writes its density to data
@composition.pattern(channel=0, length=4)
def ch1(p):
    density = perlin_1d(p.cycle * 0.05, seed=1)
    p.data["density"] = density
    p.euclidean(60, pulses=int(2 + density * 10))

# Pattern 2 reads that density to affect its own behaviour
@composition.pattern(channel=1, length=4)
def ch2(p):
    density = p.data.get("density", 0.5)
    p.seq("48 _ 51 _ 53 _ 55 _", velocity=80)
    p.dropout(1.0 - density)  # sparser when ch1 is dense
```
Click the ✕ button next to any signal in the Signals tab to remove it from the display and the data dictionary.
These methods generate entire note sequences from mathematical systems. They are all available
directly as p. methods — no imports needed.
The Euclidean algorithm distributes N hits across M steps as evenly as possible. This produces the rhythmic patterns found in West African drumming, Middle Eastern music, and countless other traditions — because maximum evenness is also maximum musicality.
```python
# 5 evenly-spaced hits across the pattern — the "cinquillo" rhythm
# (steps are derived automatically from pattern length)
p.euclidean(60, pulses=5)

# Stack multiple voices for polyrhythm
p.euclidean(36, pulses=3)  # kick
p.euclidean(40, pulses=5)  # snare
p.euclidean(42, pulses=7)  # hi-hat

# Use p.shift() to offset a voice's start point for variation
p.euclidean(60, pulses=5)
p.shift(3)  # shift all notes forward by 3 grid steps
```
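The even spacing itself is easy to approximate in plain Python — a Bresenham-style sketch of the idea, not POMSKI's implementation:

```python
def even_hits(pulses, steps):
    # A step gets a hit wherever i*pulses crosses a multiple of steps
    return [i for i in range(steps) if (i * pulses) % steps < pulses]

print(even_hits(3, 8))   # → [0, 3, 6]  (the tresillo)
print(even_hits(5, 16))  # five hits spread across 16 steps
```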
The Lorenz attractor is a set of differential equations that produces chaotic (never exactly repeating) but aesthetically coherent trajectories. POMSKI maps its x-axis values to pitch.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.lorenz(
        steps=16,
        pitch_range=(48, 72),    # C3 to C5
        velocity=80,
        s=10.0, r=28.0, b=2.667  # classic parameters
    )
    p.quantize("C", "minor")
```
Each note is a small random step away from the previous — like a melody that wanders without leaping wildly. Highly musical because it respects the statistical shape of real melodies.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.brownian(
        start=60,             # starting pitch
        steps=16,
        step_size=2,          # max semitones per step
        pitch_range=(48, 72)
    )
    p.quantize("D", "dorian")
```
Conway's Game of Life is a 2D cellular automaton — cells live or die by neighbour rules. POMSKI runs N generations and reads one row: live cells trigger note hits. The result is rhythm that feels like it has internal logic — because it does.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    p.game_of_life(
        pitch=60,
        cols=16, rows=4,
        generations=p.cycle % 32 + 1,  # advance each loop
        row=0
    )
```
The logistic map (x → r·x·(1−x)) is one of the simplest equations that produces
chaos. At r > 3.57 it becomes chaotic — pitch and velocity diverge unpredictably
from tiny differences in starting conditions.
```python
p.logistic(steps=16, r=3.9, pitch_range=(48, 72))
```
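The map itself is one line of Python. A sketch of how its chaotic values could feed a pitch range — an illustration of the idea, not POMSKI's internals:

```python
def logistic_series(r=3.9, x0=0.5, n=16):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)  # the logistic map: x → r·x·(1−x)
        xs.append(x)
    return xs

# Map each chaotic value into a MIDI pitch range
lo, hi = 48, 72
pitches = [int(lo + x * (hi - lo)) for x in logistic_series()]
# All pitches stay within the requested range
```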
The Gray-Scott equations model chemical reactions creating Turing-like patterns — spots and stripes that emerge from uniform initial conditions. POMSKI maps the intensity field to note velocities, creating complex dynamic textures on a fixed pitch.
```python
p.gray_scott(pitch=60, n=16, f=0.055, k=0.062)
```
Distribute notes at irrational intervals based on φ (1.618…). Because φ is the "most irrational" number, the resulting rhythm never exactly repeats — but it fills the bar with uncanny balance.
```python
p.golden_ratio(pitch=67, count=8, velocity=75)
```
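The placement rule can be sketched: each hit lands at the fractional part of i·φ, scaled to the bar. This is an illustration of the idea, not necessarily the library's exact algorithm:

```python
PHI = (1 + 5 ** 0.5) / 2  # ≈ 1.618

def golden_beats(count, length=4.0):
    # The fractional part of i*φ spreads evenly but never repeats exactly
    return sorted(((i * PHI) % 1.0) * length for i in range(count))

beats = golden_beats(8)
# 8 distinct positions, all within the 4-beat bar
```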
Define weighted transitions between named states — each state "decides" what comes next based
on relative weights. You provide two things: a transitions dict mapping state names
to a list of (next_state, weight) pairs, and a pitch_map dict mapping
state names to MIDI note numbers. Produces melodies that feel stylistically consistent without being repetitive.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    # transitions: each state maps to a list of (next_state, weight) pairs
    # pitch_map: maps each state name to a MIDI note number
    p.markov(
        transitions={
            "root": [("3rd", 4), ("5th", 2), ("root", 1)],
            "3rd":  [("5th", 3), ("root", 2), ("7th", 1)],
            "5th":  [("root", 3), ("3rd", 2)],
            "7th":  [("root", 4), ("5th", 2)],
        },
        pitch_map={"root": 60, "3rd": 63, "5th": 67, "7th": 70},
        velocity=80,
        step=0.25,  # 16th note spacing
    )
```
p.quantize_m21() gives access to hundreds of scale types. The scale snapping logic
is identical to p.quantize() — the only difference is the source of pitch classes.
Requires pip install music21.
```python
# Indian raga
p.brownian(start=60, steps=16)
p.quantize_m21("D", "RagAsawari")

# Octatonic (diminished) scale
p.lorenz(steps=16, pitch_range=(48, 72))
p.quantize_m21("C", "OctatonicScale")

# Whole-tone
p.brownian(start=60, steps=16)
p.quantize_m21("C", "WholeToneScale")
```
POMSKI includes a bridge to Ableton Live via the AbletonOSC remote script. Once connected, you can fire clips, read track volumes, and automate parameters — all from patterns and the REPL.
| Command | Description |
|---|---|
| live.clip_play(track, clip) | Fire a clip. Both indices are 0-based. |
| live.scene_play(scene) | Fire a scene (launches all clips in that row). |
| live.track_volume(track, value) | Set track volume. value is 0.0–1.0. |
| live.track_mute(track, True/False) | Mute or unmute a track. |
| live.device_param(track, device, param, value) | Set a device parameter (0.0–1.0). |
| live.set_tempo(bpm) | Change Live's tempo. |
| live.watch("track/0/volume") | Subscribe to a Live property. Its value is pushed to composition.data["live_track_0_volume"] continuously. |
| live.tracks | List of track names in the current Live set. |
| live.connected | True if the Live bridge is active. |
```python
# Watch a Live parameter (run once from REPL)
live.watch("track/0/volume")

# Now use it inside any pattern
@composition.pattern(channel=0, length=4)
def ch1(p):
    vol = p.data.get("live_track_0_volume", 0.8)
    vel = int(vol * 127)
    p.seq("60 _ 63 65 _ 67", velocity=vel)
```
Think of a live POMSKI set like a DJ set, but instead of swapping tracks you are rewriting the rules of the music in real time. A practical workflow:
| Shortcut | Action |
|---|---|
| Shift+Enter | Send the current code block (detected by double newlines above the cursor) |
| Ctrl+↑ / Ctrl+↓ | Navigate command history (last 200 sent blocks) |
```python
# Drop out notes probabilistically — density controlled by Perlin noise
@composition.pattern(channel=0, length=4)
def ch1(p):
    from subsequence.sequence_utils import perlin_1d
    sparsity = perlin_1d(p.cycle * 0.04, seed=7)
    p.data["sparsity"] = sparsity
    p.seq("60 63 65 67 65 63 67 70", velocity=80)
    p.dropout(sparsity * 0.6)  # max 60% dropout
    p.quantize("C", "dorian")
```
```python
# Modulate key every 16 bars using p.bar
@composition.pattern(channel=0, length=4)
def ch1(p):
    modulation = (p.bar // 16) % 3  # 0, 1, 2 cycling
    offset = [0, 5, 7][modulation]  # root, fourth, fifth
    p.seq("60 63 65 67", velocity=85)
    p.transpose(offset)
```
A length=128 pattern plays for 32 bars before repeating. Inside it, p.cycle
increments once every 32 bars — so p.cycle itself counts 32-bar sections.
Use very long patterns for slow, geological-timescale evolution.
```python
@composition.pattern(channel=0, length=128)
def ch1(p):
    # Play 32 notes spread over 32 bars — each note is 1 bar apart
    for i in range(32):
        pitch = 48 + i % 12
        p.note(pitch, beat=i * 4, duration=3.5)
    p.quantize("C", "minor")
```
The editor supports multiple named buffer tabs. Add a tab with the + button,
double-click a tab label to rename it, and click × to remove one.
All buffer contents and names persist across browser reloads and restarts — your code
is always there when you reopen the UI.
A practical use: keep a different song section in each buffer — intro in tab 1, chorus
in tab 2, bridge in tab 3. Switch between them with a click and send any block with
Shift+Enter.
If a pattern raises an exception — a typo, a missing import, a bad argument — the full traceback appears as a red error message in the log panel. The pattern continues cycling silently (playing nothing) until you fix and re-send it. You do not need to watch the terminal.
The best live coding performances have silences, mistakes, and recovery. An error in the log panel is not embarrassing — it is evidence that something real is happening. Leave the log visible on screen if you are performing for an audience.
POMSKI includes a built-in feeds object that lets you poll any HTTP API
at runtime and pipe the results directly into composition.data, where
patterns can read them on every cycle. No template editing required — just type into
the command box and the feed starts immediately.
Each feed is an asyncio task running on POMSKI's event loop. It fetches a URL every
N seconds, applies your extract function to the JSON response,
and writes the result to composition.data["feed_<key>"]. Patterns
are rebuilt every cycle, so they always see the latest value with no extra wiring.
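The mechanism can be sketched as a plain asyncio loop with a stubbed fetch. This is an illustration of the described cycle, not the real feeds object — fake_fetch and its payload are hypothetical:

```python
import asyncio
import json

data = {}  # stands in for composition.data

async def poll_feed(key, fetch, interval, extract, cycles=3):
    # Fetch → parse JSON → extract → publish under "feed_<key>"
    for _ in range(cycles):
        raw = await fetch()
        data[f"feed_{key}"] = extract(json.loads(raw))
        await asyncio.sleep(interval)

async def fake_fetch():
    # Stand-in for an HTTP GET returning a JSON body (hypothetical payload)
    return '{"iss_position": {"latitude": "21.5"}}'

asyncio.run(poll_feed("iss", fake_fetch, interval=0.01,
                      extract=lambda r: float(r["iss_position"]["latitude"])))
print(data)  # → {'feed_iss': 21.5}
```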
Call feeds.add() from the command box, the main editor, or a REPL client.
The feed starts polling immediately and persists until you stop it or restart POMSKI.
```python
# Poll the ISS position API every 5 seconds
feeds.add(
    "iss",
    "http://api.open-notify.org/iss-now.json",
    interval=5,
    extract=lambda r: float(r["iss_position"]["latitude"])
)
```
The value lands at composition.data["feed_iss"] after the first fetch.
Until then, .get("feed_iss", default) returns your default — always
provide one so patterns play something sensible before data arrives.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    # Latitude −90 to +90 → MIDI pitch 48–84
    lat = composition.data.get("feed_iss", 0)
    pitch = int(48 + (float(lat) + 90) / 180 * 36)
    p.note(pitch, beat=0)
```
| Call | What it does |
|---|---|
| feeds.add(key, url, interval, extract, headers, method, body) | Start (or restart) a named polling feed. extract is a callable that receives the parsed JSON and returns the value to store. Omit it to store the full response dict. |
| feeds.stop("key") | Cancel and remove a feed by name. |
| feeds.stop_all() | Cancel every running feed. |
| feeds | Print active feeds and their current values (type this bare in the REPL). |
| composition.data.get("feed_key", default) | Read the latest value inside a pattern. Always supply a default. |
Raw API numbers rarely map neatly to MIDI ranges. A simple linear rescale covers most cases:
```python
def scale(val, in_lo, in_hi, out_lo, out_hi):
    return out_lo + (val - in_lo) / (in_hi - in_lo) * (out_hi - out_lo)

@composition.pattern(channel=0, length=4)
def ch1(p):
    # A stock price (say 100–300) mapped to pitch and velocity
    price = composition.data.get("feed_stock", 200)
    pitch = int(max(36, min(84, scale(price, 100, 300, 36, 84))))
    velocity = int(max(40, min(120, scale(price, 100, 300, 40, 120))))
    p.seq(f"{pitch} _ {pitch} _", velocity=velocity)
```
```python
# GET with a custom Accept header
feeds.add(
    "gh_stars",
    "https://api.github.com/repos/python/cpython",
    interval=60,
    headers={"Accept": "application/vnd.github+json"},
    extract=lambda r: r["stargazers_count"]
)

# POST with a JSON body
feeds.add(
    "sensor",
    "https://my-api.example.com/query",
    interval=2,
    method="POST",
    body={"sensor_id": "A1", "field": "temperature"},
    extract=lambda r: r["value"]
)
```
Calling feeds.add() with the same key replaces the existing feed — the old
polling task is cancelled and a new one starts immediately. This lets you swap data
sources mid-performance without touching any pattern code:
```python
# First set: ISS latitude
feeds.add("signal", "http://api.open-notify.org/iss-now.json",
          extract=lambda r: float(r["iss_position"]["latitude"]))

# Later in the set: swap to a different source, same key
feeds.add("signal",
          "https://api.open-meteo.com/v1/forecast?latitude=51.5&longitude=-0.1&current=temperature_2m",
          extract=lambda r: r["current"]["temperature_2m"])

# Pattern code never changes — it always reads "feed_signal"
```
DataFeeds uses Python's built-in urllib — no pip install
required. Any public JSON API reachable from your machine will work.
POMSKI bundles music21 — MIT's
computational music library — and exposes it through a single pattern method:
p.quantize_m21(key, scale_name). Call it after placing notes to snap every
pitch to the nearest degree of any music21 scale class.
The built-in p.quantize(key, mode) covers the nine Western church modes
and nothing else. p.quantize_m21 extends this to dozens of scales — ragas,
whole-tone, octatonic, Xenakis sieve, and more — using music21's complete pitch theory
engine under the hood.
```python
@composition.pattern(channel=0, length=4)
def ch1(p):
    # Place notes first, then quantise
    for i in range(16):
        p.note(60 + i, beat=i * 0.25)
    p.quantize_m21("A", "MelodicMinorScale")
```
quantize_m21 is a transformation — like transpose() or
dropout(), it modifies whatever notes are already in the pattern.
Place your notes first, then call it at the end of the builder function.
Pass scala_name="filename.scl" instead of scale_name to access
music21's bundled Scala archive — 3,935 tuning files covering mbira, gamelan,
Pythagorean, just intonation, historical European temperaments, and hundreds of
other systems. scala_name takes precedence over scale_name
when both are given.
```python
@pat(0)
def ch1(p):
    p.brownian(start=60, steps=16)
    p.quantize_m21("C", scala_name="mbira_banda.scl")

@pat(1)
def ch2(p):
    p.arpeggio([60, 62, 64, 65, 67], step=0.25)
    p.quantize_m21("C", scala_name="pyth_12.scl")  # Pythagorean 12-tone
```
Some useful Scala files from the archive:
| File | Tuning system |
|---|---|
| mbira_banda.scl | Mbira dza vadzimu — traditional Shona tuning |
| mbira_gondo.scl, mbira_zimb.scl | Other mbira regional variants |
| gamelan_udan.scl | Gamelan pélog/sléndro approximation |
| pyth_12.scl | Pythagorean 12-tone tuning |
| harm8_1.scl | 8th harmonic partial series |
| Scale class | Character |
|---|---|
| MajorScale | Standard major (Ionian) |
| MinorScale | Natural minor (Aeolian) |
| HarmonicMinorScale | Minor with raised 7th — exotic cadences |
| MelodicMinorScale | Ascending form — raised 6th & 7th; jazz minor |
| DorianScale | Minor with raised 6th — modal jazz foundation |
| PhrygianScale | Minor with flattened 2nd — Spanish / flamenco feel |
| LydianScale | Major with raised 4th — dreamy, floating quality |
| MixolydianScale | Major with flattened 7th — rock & blues staple |
| LocrianScale | Diminished feel — flattened 2nd & 5th |
| OctatonicScale | 8-note diminished scale — alternating whole/half steps |
| WholeToneScale | 6-note scale — all whole steps; Debussy-like ambiguity |
| RagAsawari | North Indian raga (Asavari) — melancholic |
| RagMarwa | North Indian raga (Marwa) — tense, unsettled |
| SieveScale | Xenakis-style interval sieve — microtonal approximation |
quantize_m21 works with any note-placement method — euclidean rhythms, random
walks, Lorenz attractors, Markov chains. Generate raw pitches algorithmically, then use
the scale to impose musical structure.
```python
# Euclidean rhythm + octatonic scale
@composition.pattern(channel=1, length=4)
def ch2(p):
    p.euclidean(60, pulses=5)
    p.quantize_m21("C", "OctatonicScale")
    p.randomize(timing=0.02)
```
```python
# Random walk snapped to Phrygian — raw chromatic walk becomes coherent
@composition.pattern(channel=0, length=4)
def ch1(p):
    note = 60
    for i in range(16):
        p.note(max(36, min(84, note)), beat=i * 0.25)
        note += p.rng.choice([-2, -1, 0, 1, 2])  # p.rng keeps it reproducible
    p.quantize_m21("E", "PhrygianScale")
```
```python
# Raga arpeggio — North Indian flavour
@composition.pattern(channel=2, length=4)
def ch3(p):
    p.arpeggio([60, 62, 63, 65, 67, 68, 71], step=0.25)
    p.quantize_m21("C", "RagAsawari")
```
quantize_m21 snaps pitches to the nearest MIDI semitone. For tuning systems
that sit between semitones — like most mbira and gamelan scales — the pitches are
approximated. p.microtuning() goes further: it snaps the MIDI note
and injects a MIDI pitch bend event at each note onset to correct the remaining
cent deviation to exact intonation.
```python
@pat(0)
def ch1(p):
    p.brownian(start=60, steps=16)
    p.microtuning("C", "mbira_banda.scl")  # true mbira intonation
```
The bend_range parameter (default 2.0 semitones) must match the
pitch bend range configured on the receiving synth. Most synths default to ±2 semitones,
but for tunings with large deviations (e.g. gamelan, which can be ±50 cents or more)
a wider range gives finer resolution:
```python
# Synth set to ±12 semitone bend range:
@pat(1)
def ch2(p):
    p.euclidean(60, pulses=5)
    p.microtuning("C", "gamelan_udan.scl", bend_range=12)
```
All notes on the same MIDI channel share one pitch wheel. microtuning
works best on monophonic lines. For chords, use a separate channel per voice,
or use quantize_m21 with scala_name for semitone-level
approximation without bend.
Because the scale is evaluated fresh every pattern cycle, you can modulate it from the command box by redefining the pattern — no restart needed.
```python
# Switch from dorian to harmonic minor on the fly
@composition.pattern(channel=0, length=4)
def ch1(p):
    for i in range(16):
        p.note(60 + i, beat=i * 0.25)
    p.quantize_m21("D", "HarmonicMinorScale")  # ← change this line and re-run
```
music21 is bundled in the POMSKI Windows installer. If you are running from source,
install it with pip install music21. If music21 is missing, calling
quantize_m21 raises an ImportError that appears as a red
error message in the POMSKI log panel.