You upload a contract to your shared drive. Three days later, someone asks if it's been reviewed.
The file sat there. No one knew. Your team checks the folder manually once a day - if they remember.
Meanwhile, your competitor's system noticed the upload instantly, routed it for review, and sent a reminder after 24 hours.
The difference isn't sophistication. It's whether your system knows to look.
LAYER 1 - Listeners watch your systems so humans don't have to.
A listener is a piece of code that monitors something continuously. A file watcher checks a folder every few seconds. A database listener subscribes to change events. A polling service calls an API repeatedly to see what's new.
The key insight: listeners convert passive systems into active ones. Your file storage doesn't tell you when something changes. But a listener watching that storage will. Your database doesn't push updates. But a change data capture stream will.
Listeners are how you make dumb systems smart. Instead of checking manually, you set up something that checks for you - and only bothers you when there's something worth knowing.
Listeners solve a fundamental problem: how do you know something changed when the source won't tell you?
Define what you're watching (a folder, a table, an API). Define how often to check (every second, every minute, on events). Define what to do when you see a change (trigger a workflow, send a notification, update a record). The listener handles the watching; you handle the reaction.
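Here is a minimal sketch of those three decisions in Python, using a plain polling file watcher. The folder path, interval, and handle_new_file reaction are placeholders for illustration, not part of any particular product:

```python
# Minimal listener sketch: what to watch, how often, and what to do on change.
import time
from pathlib import Path

WATCH_DIR = Path("shared-drive/contracts")   # what you're watching (assumed path)
POLL_INTERVAL = 30                           # how often to check, in seconds

def handle_new_file(path: Path) -> None:
    # The reaction: trigger a workflow, send a notification, update a record.
    print(f"New file detected: {path.name}")

def watch() -> None:
    seen = {p.name for p in WATCH_DIR.iterdir()}     # snapshot of what's already there
    while True:
        time.sleep(POLL_INTERVAL)
        current = {p.name for p in WATCH_DIR.iterdir()}
        for name in sorted(current - seen):          # only react to genuinely new files
            handle_new_file(WATCH_DIR / name)
        seen = current

if __name__ == "__main__":
    watch()
```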
In practice, there are three common ways a listener can detect changes.
Polling - check repeatedly at intervals
Every 30 seconds, call the API and compare the response to what you saw before. Simple and universal - works with any system that has a read endpoint. But you're burning API calls even when nothing changes.
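A hedged sketch of that loop, assuming the requests library and a hypothetical read endpoint:

```python
# Polling sketch: call a read endpoint on an interval and diff against the
# last response. The URL is a placeholder.
import time
import requests

ENDPOINT = "https://api.example.com/documents"   # hypothetical read endpoint
POLL_INTERVAL = 30                               # seconds between checks

def poll() -> None:
    previous = None
    while True:
        response = requests.get(ENDPOINT, timeout=10)
        response.raise_for_status()
        current = response.json()
        if previous is not None and current != previous:
            print("Change detected")             # hand off to your reaction here
        previous = current
        time.sleep(POLL_INTERVAL)                # every cycle costs an API call,
                                                 # even when nothing changed

if __name__ == "__main__":
    poll()
```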
Webhooks and events - get notified when changes happen
Subscribe to a webhook, message queue, or event stream. The source system tells you immediately when something changes. Zero wasted calls. But the source has to support it.
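A minimal sketch of the receiving end, using Flask as one common choice; the /webhook route and the event payload shape are assumptions for illustration:

```python
# Push-model sketch: the source calls you the moment something changes,
# so there is no polling loop at all.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_event():
    event = request.get_json(silent=True) or {}
    print(f"Received event: {event.get('type', 'unknown')}")
    return "", 204   # acknowledge quickly; do heavy work asynchronously

if __name__ == "__main__":
    app.run(port=8000)
```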
Change data capture - stream database changes directly
Connect to the database's transaction log and see every insert, update, and delete as it happens. You get the before and after state. Works even when you can't modify the application.
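The exact wiring depends on your database. One common setup publishes the transaction log to Kafka through a tool like Debezium; here is a hedged sketch of the consuming side with kafka-python, where the topic, broker, and field names are assumptions:

```python
# CDC consumer sketch: assumes the database's transaction log is already being
# published to a Kafka topic (for example by Debezium). Topic and broker names
# are placeholders; if your connector wraps events in a "payload" envelope,
# unwrap it before reading the fields below.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver.public.contracts",            # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw) if raw else None,
)

for message in consumer:
    change = message.value
    if change is None:
        continue
    # Debezium-style events carry the operation plus before/after row state.
    op = change.get("op")                   # "c" = insert, "u" = update, "d" = delete
    before, after = change.get("before"), change.get("after")
    print(f"{op}: {before} -> {after}")
```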
A sales rep uploads a contract to the shared drive at 2pm. Without a listener, it sits there until someone remembers to check. With a listener in place, the system detects the upload instantly, extracts the key terms, routes it to legal review, and sets a reminder - all before the sales rep closes their browser tab.
You set your poller to check every second because you want 'real-time' updates. Now you're hitting rate limits, your API costs tripled, and you're getting blocked. The system you're watching flags you as abusive.
Instead: Start with longer intervals (30-60 seconds). Only decrease if you genuinely need faster detection AND the source can handle it.
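One way to stay polite is an adaptive loop: poll at a conservative base interval and back off when nothing changes or the source pushes back with HTTP 429. A sketch, with a hypothetical endpoint:

```python
# Adaptive polling sketch: slow down when the source is quiet or rate-limiting,
# return to the base rate only when there is real activity.
import time
import requests

ENDPOINT = "https://api.example.com/documents"   # hypothetical endpoint
BASE_INTERVAL = 30        # start conservative
MAX_INTERVAL = 300        # never back off beyond five minutes

def poll_politely() -> None:
    interval, previous = BASE_INTERVAL, None
    while True:
        response = requests.get(ENDPOINT, timeout=10)
        if response.status_code == 429:                      # rate limited: back off hard
            interval = min(interval * 2, MAX_INTERVAL)
        else:
            current = response.json()
            if previous is not None and current != previous:
                print("Change detected")
                interval = BASE_INTERVAL                     # activity: back to base rate
            else:
                interval = min(interval * 2, MAX_INTERVAL)   # quiet: slow down
            previous = current
        time.sleep(interval)

if __name__ == "__main__":
    poll_politely()
```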
You process change events as they arrive. But the network delivered event 3 before event 2. Now your data is corrupted because you applied the older change on top of the newer one.
Instead: Include sequence numbers or timestamps. Check for ordering. Buffer and sort if needed.
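A small sketch of the buffer-and-sort idea, assuming each event carries a seq number (an assumption for illustration):

```python
# Ordering sketch: buffer events that arrive early and apply each one only
# when its predecessor has been applied.
def make_ordered_applier(apply, first_seq=1):
    buffered = {}            # seq -> event, held until its turn comes up
    next_seq = first_seq

    def on_event(event):
        nonlocal next_seq
        buffered[event["seq"]] = event
        # Drain everything that is now contiguous with the last applied event.
        while next_seq in buffered:
            apply(buffered.pop(next_seq))
            next_seq += 1

    return on_event

if __name__ == "__main__":
    handle = make_ordered_applier(lambda e: print("applied", e["seq"]))
    for seq in (1, 3, 2):    # event 3 arrives before event 2
        handle({"seq": seq}) # still applied in order: 1, 2, 3
```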
Your listener crashed for 10 minutes. When it came back, it started from 'now' instead of where it left off. Those 10 minutes of changes are gone forever. Nobody noticed until the data was wrong.
Instead: Track your last processed position. On restart, resume from there. Do periodic reconciliation to catch drift.
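A sketch of that checkpointing pattern; the checkpoint file, the fetch_changes_since call, and the position field are placeholders for whatever cursor your source exposes:

```python
# Checkpoint sketch: persist the last processed position so a restart resumes
# where it left off instead of at "now".
import json
from pathlib import Path

CHECKPOINT_FILE = Path("listener_checkpoint.json")

def load_position(default=0):
    if CHECKPOINT_FILE.exists():
        return json.loads(CHECKPOINT_FILE.read_text())["position"]
    return default

def save_position(position) -> None:
    CHECKPOINT_FILE.write_text(json.dumps({"position": position}))

def run(fetch_changes_since, apply) -> None:
    position = load_position()
    for change in fetch_changes_since(position):   # resume from the saved cursor
        apply(change)
        save_position(change["position"])          # commit progress after each change
```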
You've learned how to detect changes automatically. The natural next step is understanding what to do with that data once you've captured it.