Cable TV Vacation Hold Project
A major cable company was losing customers — not because they didn't want the service, but because the vacation hold process was so complex that users couldn't navigate it and simply cancelled instead. A rolling 365-day window governed how many holds a customer could take, for how long, and under what conditions — rules so convoluted that even agents struggled to apply them consistently.
I designed a solution built on a very Celtic concept, the value of "betwixt and between": neither fully this nor that, but some of each and something else altogether. The situation called for the same quality in its solution. Not fully chatbot, not fully voice, but both working together, plus something else altogether: a foundation for agentic AI, a system that could autonomously interpret the rules, assess each customer's unique eligibility in real time, and guide them through the process conversationally. As the client's underlying infrastructure was expanded and updated, those agentic features could be expanded and refined, so that each user's historical usage patterns could be identified and leveraged to proactively anticipate that user's needs and offer suggestions. This scalability was built into the design from the first iteration, so the evolution would remain seamless, natural, and completely comfortable to the user.
What they asked for
The project was part of a larger revamp of the client's IVR and chatbot applications, bringing both in line with the company's current technologies and services and leveraging the power of AI to drive the system responses.
What we envisioned
The agentic-feedback approach drove most of the architectural decisions that followed:
Chatbot as the primary modality, with full IVR parity. The chatbot would carry the visual affordances that made vacation hold dates legible — calendar displays, dimmed-out unavailable dates, visible date-range constraints — but the IVR had to mirror the same logic and the same data, so a user could start a vacation hold in chat, finish it by phone, and have nothing fall through the cracks. The two modalities couldn't be separate experiences sharing a vague philosophy; they had to be the same experience expressed through different channels.
A unified data store serving both channels. Cross-channel parity required architectural unity, not just design parallelism. The data store had to be the source of truth for both modalities, with each channel's interface drawing from and writing to that same store in real time. A user updating a vacation hold in chat would see the change reflected immediately in the IVR; an agent-assisted change in either modality would be visible everywhere. No data loss, no synchronization lag, no "let me check that for you" delays — the system always knew where each user was in their flow, regardless of how they got there.
Proactive interpretation, not reactive enforcement. The agentic-feedback model meant the system needed to evaluate rule eligibility before a user committed to a choice, not after. Every interaction point — date selection, duration choice, eligibility check — became a moment for the system to surface relevant constraints conversationally. The user never had to navigate the rule set themselves; the system surfaced what they needed to know, when they needed to know it.
Together, these three choices defined the design space within which everything else got built.
What we built
The execution followed a funnel pattern: eligibility filters at the gate, then within-eligible-user logic for each step of the vacation hold itself.
Eligibility was straightforward — residential accounts, in good standing, at least 30 days old, on a qualifying service plan. Most users either qualified or didn't, and the system could surface ineligibility immediately, with a brief, plain-language explanation of why.
The within-eligible-user logic was where the work got interesting. Vacation holds had to be at least 60 days but no more than 270, with at least 30 days of uninterrupted service between holds. This all took place within a rolling 365-day window calculated from both the start and end dates of any prior hold. (In all fairness, the partridge in a pear tree did remain optional.)
This is where the agentic-feedback approach earned its keep. Rather than letting the user fail and then explaining why, the system anticipated. As soon as a start date was selected, the system evaluated the entire downstream rule set — can this user create a hold of any valid duration starting on this date? — and surfaced the result conversationally before the user had to commit to anything. "I can do that for you, but the longest hold I can offer from that start date is 9 days. Would you like to choose a different start date, or would 9 days work for what you need?"
Each interaction point was an opportunity to surface relevant constraints, suggest alternatives, and keep the user moving toward a successful outcome. The user never had to understand the 365-day rolling window, the 60-day minimum, the 270-day cap, or the start-date/end-date dual measurement. The system understood those things on their behalf.
(Full eligibility and business-rules specifications, along with the accompanying interaction flow, are available for readers who want the underlying detail.)
What it achieved
The chatbot deployment achieved a 35% reduction in transfers to live agents — a meaningful number on its own, but more meaningful in context: the reduction came not from deflecting users away from the help they needed, but from giving them a self-service path that actually worked. Customers who would previously have given up and cancelled now had a navigable way through the rules, and the agent capacity that used to be consumed by failed self-service attempts was freed for the higher-complexity calls where human judgment genuinely added value.
The IVR companion is rolling out in a subsequent phase, and the transfer-reduction figure is expected to grow as channel parity completes — the agentic-feedback model becomes more powerful, not less, when both modalities are live and reinforcing each other.
The bot itself, in the end, was unremarkable to look at — exactly as it should be. Good conversational design becomes invisible when it's working. The interesting work was happening underneath: the rule interpretation, the proactive feedback, the dual-channel data parity, the architectural foundation for genuinely agentic capability. The accompanying interaction flow (with the actual company name fictionalized) shows the underlying logic that turned a Kafka-esque nightmare into something a user could actually navigate.