Reimagining AI Assistants / by Noel Childs

Who really benefits from our current digital assistants? In this piece, inspired by my NoelAI miniseries on my design podcast, A76, I go on a deeply personal and sharply critical journey through the current state of AI assistants, and ask why, after years of hype, we still don’t have an assistant that truly serves us.

Blending design thinking, cultural critique, and speculative hardware, I posit a radically user-centric vision for an AI assistant built around human needs and not tech company agendas. What would it look like to ditch your phone and have something smaller, smarter, and more human by your side? And more importantly, what’s standing in the way?

This is a call to reimagine. To push past uncanny interfaces and shallow convenience into something more meaningful, and ultimately to demand more from our digital tech.

Who doesn’t want an assistant? Someone to take care of all those boring tasks? Who’s always there when sudden things pop up? Not so much a Samantha for your inner Theodore Twombly, and something more than a Peggy for your Don. A guide, a companion, a backup, someone riding shotgun with you. And only for you.

And while most of us will never experience the indulgence of having an actual personal assistant, there is a little glimmer in the promise of personal digital assistants. And this is why I created the series ironically called NoelAI: I wanted to explore the reality of existing digital assistants, the potential of what they could become, and ponder the question of the uncanniness of a digital me. The idea sparked in my brain years ago while working on a chatbot ~ sort of the dumber, simpler proto tech for AI assistants. I realized at the time that whether it’s a chatbot, Siri, ChatGPT, or really any other query someone makes to a computer, we can’t help but anthropomorphize the device we’re interacting with and feel as if we’re talking with someone. And if you’ve listened to previous episodes of A76, you know I’m not surprised at the rise of conversational UI. It’s in our nature to be social and connect.

But NoelAI is not just a flip name for a podcast series. I genuinely have wanted an assistant to help me with my life, not because I think I can’t cope or I’m so entitled that I deserve one. I’m also lucky to have family and friends in my life. It’s born out of the desire to take repetitive tasks off my plate, automate the things I don’t need to spend brain power on, and gain insights about myself that can make me a better, more efficient, more content human being. But when I take a step back and really look at the state of AI assistants, set aside the hype and shiny promise of ChatGPT and the experience of Siri, and take a discerning look at all the assistants that exist, I ask myself: do they fulfill their promise? And will they ever? Or are they all just another shimmering future promised over the tech horizon?

Maybe my expectations are too high. And maybe the tech won’t get there for a very long time. Google has been running a campaign for months showcasing their Google Assistant on their mobile devices and it’s so underwhelming. Booking a lunch reservation, where the phone knows your preferences? There has to be more than that. Apple has been struggling to truly launch Apple Intelligence, and the features they’re showcasing are a big bag of, well, meh.

I recently listened to Adam Conover interviewing Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. It’s an eye-opening book. You should read it. But the interview, for me, kicked open a door I was turning the handle on anyway. What is going on with large tech companies and AI? And obviously you know I’m bullish on AI. But the recent moves: aligning with the Trump regime, investing in massive data centers, Sam Altman’s running dialogue around AI infrastructure, investment, and global partnerships. There is a game being played on a new level that couldn’t be farther from user-centric AI assistants. These companies are chasing a very specific approach to AI that needs an insane amount of resources: data centers processing highly intensive workloads, the rare metals to build them, and an eye-watering amount of electricity. The infrastructure they’re chasing is obscene.

As a user-centric designer who cares about people and the earth, I’m at the point where I’m questioning every tech brand’s strategy. And of course this isn’t new and it’s totally in the zeitgeist now. But also, what’s old is new. It’s just the latest version of empire building at all costs, and honestly, aren’t we all a bit exhausted by big tech? Something different is needed. AI doesn’t have to be constructed with this approach. There are real projects proving out a leaner, more efficient approach with scalable results. And that’s the critical backdrop we find ourselves in. So where do we go from here?

Before we get into my concept for an AI assistant, let’s talk about what I already covered in the first three episodes of this series.

In part one (Patreon) of the NoelAI series I questioned how useful digital technology is in our lives. Which is, on its face, admittedly a ridiculous question. Technology has shaped and continues to shape our lives in profound ways, and as annoying as it may be to folks who have grown up with our current tech and never known any different (digital natives, as they’re so often called), any of us who clearly remember a world without home computers or mobile phones know that our culture has not only completely changed, but that it could also look very different.

And if pressed, I could easily argue that it’s actually been quite negative. Is that overly simplistic? Yes. Overdramatic? Yes. Do I sound like an old man? Well, I am an old man. I know it’s a bit Luddite-esque. But I know I’m not the only one who’s feeling this way. And it’s not just me making a provocative statement. My issue with all of it is that digital tech too often serves one group: the companies that created it. It almost always comes from one group of people: white, male, Silicon Valley tech bros. It’s a generalization for sure, but that doesn’t mean it’s not true. And because of that, its core intention is often not user-centric or serving a wide range of diverse people.

Is this a dangerous thought? Yes, it actually is for the tech world. They want you to buy and engage and not think too much about the wheres and the hows of everything.

Recently Sam Altman and Jony Ive announced a deeper partnership through a bizarre video (trademark dispute pending, of course). You should watch it. It’s fascinating and so unsettling. And I’m both excited by the advancements in GenAI and super appreciative of Ive’s impact on the design world. But also, I can’t help but wonder what new pivot hell they’ll drop on the world. It all felt very weighty. Do we think these two brilliant but flawed men will deliver something completely different?

In part two (Patreon) of the series, I talk about mapping behaviors throughout your day and truly understanding how you spend your time. Having a critical eye on it takes a combination of self-reflection, thinking about your life, and setting aside the stories we tell ourselves about our world. Obviously all of our lives, and what we do with them, are important, but we spend a lot of time in our heads, sometimes not physically doing much, thinking about the future or immersed in scrolling. No judgement. I’m right there with you. We live in a messed up world, but it’s also critical we understand how we spend our time if we want a meaningful relationship with technology. Call it a splash of reality and a nod to being present.

And on the flip side of that, our apps and digital experiences should always be delivering utility. Usefulness. Meaning. That’s the exchange. Brands deliver experiences, hopefully useful ones, we engage with them, and they make money off of that. And in that episode I explored a way of assessing the value of something as simple as the apps on our phones. Our phones! Has there ever been a time in history when one item was seemingly so important to us? And yet we all have a few useless apps installed that only benefit the companies that made them. It’s bonkers. Purge your phones of garbage, folks!

In part three (Patreon) of NoelAI, I explore a more positive perspective and focus less on barriers. I won’t go into detail here since I’ll lay out my recommendation later on. But suffice it to say, dualistic thinking is important as a means to deepen our perspective. I’m notorious for arguing the other side, even in the same conversation, but it’s all deliberate. I think it’s important to look at all facets of things. And in this episode we knocked down previously discussed barriers and tensions in a positive way.

So with that background—what could a highly useful, engaging user-centric AI assistant experience be like? 

Here Are My Assumptions
A successful AI assistant wouldn’t necessarily need to live on our phones, but it would need to connect to our full ecosystems.
A successful AI assistant would need to work with both iOS and Android, be system agnostic, and allow settings to be turned on and off at a micro level.
A successful AI assistant would need to train on as much data as exists about you, and continue to learn based on interactions and behaviors. The more data, the more useful.
And because of that, a successful AI assistant would need to have a high level of security and privacy. Highest level of encryption. Biometric authentication. Anything less is a non-starter.
A successful AI assistant should limit its connections to the internet, staying as local to you as possible.
And a successful AI assistant should be additive and interesting, with a compelling, even entertaining, personality. Why be boring?

And one of the best ways to illuminate my approach is to start at the beginning. Here’s what I want out of an AI assistant experience. And that begins with a soft intro into onboarding.

Onboarding
You’d arrive at a simple, intuitive site spun up by AI, with a limited set of questions asked conversation-style. Questions like:

  1. What do you want to get out of an AI assistant?

  2. What kind of relationship would you have with it?

  3. How open are you to sharing your personal data?

  4. Are you ok with the AI connecting to all of your accounts?

  5. Is it important that the AI give you insights about you, personally?

Your open-ended answers would determine the details of the device shipped to you. It would also ask for a voice sample, which would tether and authenticate the device to you. There’s obvious personal risk to this, so I’d want a strong privacy approach. Also, the enterprise should be built as lean and flexible as needed. Think high quality, personalized products from the beginning. My device would have a front and back camera but no screen. More on that in a bit. It would connect to cell and Wi-Fi networks. Higher end Bluetooth for AirPods. GPS, accelerometers, etc. for location tracking. DRAM for the operating system. Audio with secure voice tech. A good battery with long life.
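To make that concrete, here’s a rough sketch, in Python, of how those open-ended answers might flow into a tailored device spec. Everything here, the field names, the rules, the defaults, is a hypothetical illustration of the idea, not any real product’s schema.

```python
# A minimal sketch, assuming a hypothetical onboarding schema.
from dataclasses import dataclass

@dataclass
class OnboardingAnswers:
    goals: str                  # "What do you want to get out of an AI assistant?"
    relationship: str           # "What kind of relationship would you have with it?"
    data_sharing_comfort: str   # "low" | "medium" | "high"
    connect_all_accounts: bool
    wants_personal_insights: bool
    voice_sample_path: str      # used to tether and authenticate the device to you

@dataclass
class DeviceSpec:
    screen: bool
    cameras: tuple = ("front", "back")
    radios: tuple = ("cell", "wifi", "bluetooth", "gps")
    storage_profile: str = "local-first"   # keep data on-device where possible
    battery_profile: str = "long-life"

def build_device_spec(answers: OnboardingAnswers) -> DeviceSpec:
    # Illustrative rules only: mention a screen in your goals and you get one;
    # a low comfort level with data sharing forces a local-first storage profile.
    screen = "screen" in answers.goals.lower()
    storage = "local-first" if answers.data_sharing_comfort == "low" else "hybrid"
    return DeviceSpec(screen=screen, storage_profile=storage)
```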

I’d be interested in the details of all of this, so the online experience would give me an overview. And I could ask questions about the specs. Based on some initial napkin sketches via consulting AI, the size of the device would probably be about 2 by 4 inches. Maybe even smaller, because I chose not to have a screen. And remember, your device would be tailored to you and specific to your needs. This is not in the realm of science fiction. Check out the advancements in 3D printing, materials, and manufacturing.

Of course the material it’s made from would be important to me. We fetishize our tech devices and I’m no different. I’d want my shard of a device to be gunmetal, with a little shine. Round edges with no branding. Something I’d set on the table to share, or carry in my coat pocket, feeling a little bit of solid weight, but not so much that it couldn’t sit snugly in my pocket.

And after confirming shipping details, it would arrive in secure but minimal packaging that could be reused. You’d take it out of the box and turn it on, simply authenticating on startup with the voice sample previously captured.

It would welcome you, ask you to put in your AirPods, and start with three simple things you’d need to do first. Simple engagement. Slowly getting onboarded, setting expectations for yourself and the AI, enjoying the experience at a human level and pace.

  1. First I’d finalize the relationship I’d want to have with the AI assistant. It could be a close friend, a coach, a professional assistant, a brother or sister, a sage. There could be amalgams of these too. This would help the AI assistant with context and help set guardrails for interactions. I’d probably choose a close friend and sage.

  2. Then I’d get a chance to name the AI assistant. This is more important than it might seem. Whatever you call it, it would use that name moving forward, and embody that name as much as possible. In many ways, this breathes life into the assistant. I’d give it a clever name, maybe after an iconic footballer or poet.

  3. And finally, I’d define a period of time during which we’d learn about each other. I’d probably give it a month. A good amount of time to give to a new relationship without putting too much expectation on getting everything out of it.

Once that’s confirmed, the AI assistant would ask about communication styles and boundaries. Often we do this without thinking too much about it with new people we meet. But with this AI, establishing these expectations—on both sides, that’s the key—would set us up for a good relationship. We’d define the hours when we could chat and the greetings that activate it. I’d probably agree to 24 hours a day, because if the AI needs to connect with me in the middle of the night, there would probably be a reason for it. And if that didn’t all work, we’d have 30 days to sort it out.
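If you imagined all of that captured as plain configuration, it might look something like the sketch below. The names, values, and structure are purely illustrative assumptions; the point is that the relationship, the name, the learning period, and the boundaries are all explicit and all changeable.

```python
# A hypothetical profile the onboarding conversation could produce; not a real schema.
assistant_profile = {
    "name": "Cruyff",                          # whatever you choose; maybe an iconic footballer or poet
    "relationship": ["close_friend", "sage"],  # amalgams are allowed
    "learning_period_days": 30,                # the time we agree to spend learning about each other
    "communication": {
        "available_hours": "24/7",             # I'd allow overnight contact; there'd be a reason for it
        "wake_greetings": ["hey", "listen"],   # phrases that start a conversation
        "interrupt_threshold": "important",    # only break in when it matters
    },
    "boundaries": {
        "topics_off_limits": [],               # filled in as the relationship develops
        "renegotiable": True,                  # everything can be changed, or reset entirely
    },
}
```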

And we’d probably not do any other refinements at this point. The AI would remind you that it wouldn’t be totally accurate or useful quite yet, not until we got to know each other. And finally, it would confirm the data it would be learning from. This is critical. And it would be different for everyone. For me, if I knew that my data was secure and I trusted the company, why wouldn’t I connect to essentially everything in my life? How would that function, and would I trust it? This is the crux. More on that later.

You’d agree to the approach, and a 30-day trial would begin. Which brings me to cost. I believe that if brands approached subscriptions and fees more flexibly, it could benefit both sides. I like paying a one-time fee up front for the life of the product and not being nickel-and-dimed, especially if the cost was right. But I know that’s not for everyone. Why not offer options, including engagement-based pricing? It could be beneficial for both sides. And we already see this with AI tools, where pricing is determined by options like time spent or tokens.

Then flash forward 30 days, to the end of the trial: the AI would politely remind you, and you’d probably have a few quick things to chat about before moving forward. Things around privacy and transparency, and how it would gain insights, with examples; deciding the payment approach; and any other housekeeping admin things to take care of to deepen the relationship. Plus a chance to change any aspect of our interactions, or even start over. That control and option would be so important.

Design Principles
Which brings me to the design principles that emerged from concepting this AI assistant.

  1. Relationship building that takes time, grows with interactions, and builds trust

  2. Flexibility over how we approach our conversations, giving me control

  3. Transparency about what data is being used and how

  4. Control over interactions and depth of engagement

  5. Respect for my boundaries, my approach, and what I’m comfortable with

These are the big pillars, resting on a foundation of usefulness and a human-centered system. And this is also when we come to a watershed moment. After establishing trust with the AI and getting used to my interactions with the system, why wouldn’t I just put my AirPods in, get rid of my phone, and put my new smaller, highly connected device in my pocket? Think about that. The simplicity. The power in taking control. Walking away from your phone. Not looking at a screen for 6 to 8 hours a day but still being connected to all things online. And I could do it because the AI would know my preferences, my calendar, when I want to do certain things, the systems I connect to, my life.

But we’re getting ahead of ourselves, surely. In the 30-day trial the device would be actively listening, seeing my interactions with my phone, my life. And at some point becoming more useful than my phone via automation, suggestions, and focusing on me. I only talk on my phone to a handful of people. It could easily bring those up and call them when needed. I text more, but my texts tend to be short. Easily covered by voice texting or audio messaging. I hate emails on my phone and so do you. Come on, you do too! We have been oversaturated with screens and they have become a crutch. So much could be done with audio.

And of course I’m lucky that I can hear, that I have the funds for good AirPods, and a cell plan that’s fast and well connected. I understand the position I’m in. It’s a privilege to think of this experience in this way. But think of it. The AI assistant would be a collection of agents that hand off tasks to each other, only checking in with me when it’s necessary. And that includes creating agents on the fly, seamlessly, without me knowing it.
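Here’s a minimal sketch of what I mean by a collection of agents, assuming a hypothetical coordinator that spins up task-specific agents on demand and only surfaces a check-in when the stakes or the uncertainty warrant it. None of this is a real framework’s API; it’s just the shape of the idea.

```python
# Hypothetical scaffolding for the "collection of agents" idea.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    description: str
    stakes: str = "low"       # "low" | "medium" | "high"
    confidence: float = 1.0   # the agent's own estimate that it handled things correctly

class Agent:
    def __init__(self, skill: str, handler: Callable[[Task], Task]):
        self.skill = skill
        self.handler = handler

    def run(self, task: Task) -> Task:
        return self.handler(task)

class Coordinator:
    def __init__(self, check_in: Callable[[Task], None]):
        self.agents: dict[str, Agent] = {}
        self.check_in = check_in  # e.g. a short audio prompt in my AirPods

    def agent_for(self, skill: str) -> Agent:
        # Create agents on the fly, seamlessly, the first time a skill is needed.
        if skill not in self.agents:
            self.agents[skill] = Agent(skill, handler=lambda t: t)
        return self.agents[skill]

    def dispatch(self, skill: str, task: Task) -> Task:
        result = self.agent_for(skill).run(task)
        # Only check in with me when it's actually necessary.
        if result.stakes == "high" or result.confidence < 0.7:
            self.check_in(result)
        return result
```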

The agentic AI world we’re already moving into can handle most of the things we ask of our phones. And our phones are already the connector and authenticator for who we are. And if you’re saying to yourself, that’s it? A voice-enabled AI is the future? I’d say to you, that’s your screen addiction talking. But also, this foundational level of flexibility would be essential. It would be different for everyone.

And before you hit stop on this episode and put your phone down, before you roll your eyes and bring up the use cases where you need a screen, where graphic UI is critical, maybe even the pleasure of viewing photos, video, etc. on your phone, I’d say ~ in this concept, I never said to get rid of all screens. Flatscreens and computer monitors are ubiquitous. And the more our accounts and information live in the cloud, the less important individual ownership over devices like iPads and laptops becomes.

So in my concept, I propose leaning into secure screen casting from your device to the various surfaces around us. With sensors and hand gestures the camera picks up, we could utilize the screens around us when we needed to see something. The UI could be simple and mostly voice based. There are protocol and privacy issues for sure, but that doesn’t mean it couldn’t be done. I love the idea of a highly intelligent, personalized device that I navigate mostly through audio. But for the 20% of use cases, or the pleasure, around viewing a screen, my device could seamlessly connect.
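As a sketch of that 80/20 split, here’s one illustrative way the decision could work: answer by voice unless the content genuinely needs a surface, then cast to the nearest trusted screen. The types and thresholds are assumptions on my part, not a real casting protocol.

```python
# A hypothetical "mostly audio, cast when needed" decision; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Display:
    name: str
    trusted: bool      # paired and allowed to receive casts
    distance_m: float

VISUAL_CONTENT = {"photo", "video", "map", "document"}

def respond(content_type: str, payload: str, nearby: list[Display]) -> str:
    if content_type not in VISUAL_CONTENT:
        return f"speak: {payload}"                      # most interactions stay audio
    trusted = [d for d in nearby if d.trusted]
    if not trusted:
        return f"speak: {payload} (no trusted screen nearby)"
    target = min(trusted, key=lambda d: d.distance_m)   # pick the closest trusted surface
    return f"cast to {target.name}: {payload}"
```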

And think about what types of experiences this concept could deliver.

Here’s a simple one based on travel 
The AI would know the days I travel from my home in the country on the train into Chicago. When the conductor comes around and asks for tickets, it could securely send a blockchain packet of my authenticated ticket to the conductor, knowing the conductor is nearby via active listening. Maybe NFC on my device could even be detected by the conductor, showing that my fare was paid. Or maybe my ticket could even show up on their screen. And of course that’s jumping past the AI looking at the schedule and automatically purchasing a ticket for me, plus knowing when I’m going to the train. It has access to my calendar, it sees me moving through my typical route, and I’m ok with it listening to me within certain hours and locations of my life, so it could easily piece together all of that information. I wouldn’t even need to confirm purchase of the ticket because the price would be within a range I’m comfortable with, which the AI would know from previous conversations. It’s a small moment, and one that sounds like science fiction, but it’s attainable in the near future and it’s the level of seamlessness I’m after.
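Stripped down to its logic, that moment is just a small automation rule. Here’s an illustrative sketch; the function, the signals, and the fare range are all hypothetical stand-ins for whatever the assistant would actually learn about me.

```python
# A hypothetical commute rule; signal names and thresholds are invented for illustration.
APPROVED_FARE_RANGE = (3.00, 12.00)   # dollars; learned from previous conversations

def handle_commute(calendar_event, location, fare_quote, conductor_nearby, ticket=None):
    # 1. Heading to Chicago on a usual travel day: buy the ticket without asking,
    #    as long as the price is inside the pre-approved range.
    if calendar_event == "train_to_chicago" and location == "en_route_to_station":
        low, high = APPROVED_FARE_RANGE
        if ticket is None and low <= fare_quote <= high:
            ticket = {"fare_paid": True, "proof": "signed_ticket_token"}
    # 2. Conductor detected (active listening, NFC): present the authenticated fare.
    if conductor_nearby and ticket and ticket["fare_paid"]:
        return {"action": "present_ticket", "ticket": ticket}
    return {"action": "none", "ticket": ticket}
```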

Or let’s think of moments of transition 
I run most mornings. Of course the AI would know that based on my behavior, Strava posts, and health data, but it would also know when I wasn’t feeling it. Which opens up a whole aspect of coaching which would be interesting. But setting that aside, most of the time I do the same digital things when I get ready for my runs. The AI would know that I was close to running and turn on pre-run mode on my running watch, allowing it to acquire GPS. It would make sure my Strava is connected to my partner and it would open up my running playlist. And then it could do it all in reverse. Except I like to decide when to end my run, which I would do, but that could also trigger a series of transitions to recovery. Strava has its own AI integration that summarizes my run in context, which my assistant could pull and read to me while I drink protein and cool down. It would also know that I like to double check my calendar for things I have scheduled for the day, which it could summarize for me. And then let’s think about the Strava app itself. If the AI has access to all my logins, it could integrate all the features and every manner of contextualizing my activity info. And to take it even further, if it spun up an agent that focused on all things Strava-esque, why not cancel my Strava subscription? And I could suggest to the AI improvements I’d want to make to the experience design of tracking my runs.
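As a sketch, those transitions could be two tiny routines triggered by behavior. The watch, music, strava, and calendar objects below are stand-ins, not real integrations; the point is how little ceremony the handoff needs.

```python
# Hypothetical pre-run and post-run routines; all device/service calls are illustrative.
def pre_run_routine(watch, music, strava):
    watch.set_mode("pre_run")          # start acquiring GPS before I'm out the door
    strava.connect_partner()           # make sure my partner can follow along
    music.open_playlist("running")

def post_run_routine(watch, strava, calendar, speak):
    activity = watch.end_activity()            # I decide when the run ends
    summary = strava.summarize(activity)       # e.g. Strava's own AI summary, read aloud
    speak(summary)
    speak(calendar.summary_for("today"))       # double-check the day's schedule over audio
```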

Or another experience around nudges
I’m relatively disciplined around the times I write, run, design, and play. But I’m also human and have my moments when I’m just not motivated. And having a clear picture of the psychological, emotional, and situational factors, or physical issues, that I’m blind to would be game changing. The AI would have good insight into most factors of my life via messages, health data, social posts, and shopping behavior, to name just a few. And external factors if allowed, such as weather, local news, and trends. Adding the layer of knowledge it has around goals, aspirations, and desires, the AI could go beyond just delivering the calendar of the day’s activities and suggest when and where I should write, run, design, and play. So often it’s just getting up and faking it til you make it, or putting on those running shoes, that’s the hard part. Making suggestions of content, starting a design for me that I finish, giving me motivational ideas that were highly personal, and nudges that were meaningful: all of it could have a profound impact on my life.
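A nudge, at its simplest, is a small decision rule over signals the assistant already has. Here’s an illustrative sketch; the signal names, thresholds, and suggestions are invented for the example.

```python
# A hypothetical nudge heuristic; everything here is an assumption for illustration.
def suggest_nudge(goal, signals):
    # signals: observations like {"rain": False, "free_block": "07:00-08:00", "days_since_last": 3}
    if signals.get("days_since_last", 0) < 1:
        return None  # already on track; stay quiet
    if goal == "run":
        if signals.get("rain"):
            return "Rain until 9. Want me to move your run to the afternoon?"
        return f"Your {signals.get('free_block', 'morning')} is open. Shoes are by the door."
    if goal == "write":
        return "I pulled up yesterday's draft and your notes. Ten minutes?"
    return f"You have {signals.get('free_block', 'some time')} free. A small step on '{goal}'?"
```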

And these three examples are just a handful of the thousands of ways the AI could connect and automate, creating more space and time, and truly being helpful. A lovely, seemingly magical way of going through your day.

Now let’s talk about why none of this is reality anytime soon. And why my whole concept won’t work. And for reasons that may surprise you. I’m fine with an AI killing off my phone as it exists now. Automating my life. Knowing my logins and personal data. I’m ready for an agentic AI world that automates swaths of my digital life. Many of you might be as well. But the tech industry doesn’t operate from a place of user-centric flexibility. iOS and Android would need to be drastically more open. The protocols aren’t compatible with each other. Generally, all platforms compete with each other. And our phones make too many connections out to the cloud, which is of course amazing but takes us down a specific path. Maybe these technical issues could be solved over time. But iOS and Android would need to change or be incentivized to collaborate. Seems like a heavy lift.

But even if that weren’t the issue, there’s no tech company currently that could or would deliver the type of experience I’m illustrating. Culturally, they just don’t think this way. And if we look at the current big players:

OpenAI, with its partnership with Jony Ive, is probably closest. And from what’s being shared, it sounds like they will deliver a new device (not a smartphone or laptop) that will be intimately connected to you. But you know they won’t ship anything profoundly different because of who they are, where they’re coming from, and what their intentions are focused on. I’m sure it will be lauded as groundbreaking, but will it actually be that?

Apple has the brand trust and the infrastructure, and they talk a good user-centric game, but recently they’ve struggled to deliver Apple Intelligence. And they’ve delivered a closed ecosystem for decades, and that’s not changing. Plus they’re the establishment now. They unfortunately can’t seem to deliver anything but incremental innovation.

Google suffers from platform bloat across their whole enterprise. Could they spin off a new company to focus on this specifically? Absolutely. But there’s not necessarily precedent for that.

Of course this is just my opinion. And who the hell am I? Well, I’m a consumer of digital experiences who also designs them, and most importantly, I’ve been promised a type of experience that hasn’t been delivered on. So I have thoughts, but so do you! And I spend my money expecting that these brands deliver on the shiny future they love to present on launch days.

So my concept for a personal AI assistant demands more. I want control and actual personalization. And a high level of security. And I’m willing to hand over control and kill off my phone for deep value. And I want something new that’s focused on me. And I’m ready to throw my phone in the trash and end the era of doom-scrolling screen addiction. Because it’s not working anymore. And I know I’m not the only one who feels that way. The current state of things is the antithesis of user-centric and, because of that, it isn’t assisting me.

And finally, let’s talk about the uncanny. The weirdness we all feel when a device knows so much about us and makes recommendations, or AI generates seemingly intelligent responses. Let’s talk about that in detail, because AGI is an existential threat I hear people talk a lot about.

The human brain is wired to detect and recognize patterns, a skill honed over millions of years of evolution. This ability is crucial for survival, allowing us to identify potential dangers, resources, and social cues. This can be described as Innate Pattern Recognition.

We also develop mental frameworks based on our experiences, which help us organize and interpret new information. When new information doesn't fit our existing schemas, our brains might attempt to "force-fit" it, leading to the perception of patterns that aren't actually there. Call it Schema Formation and Assimilation.

But there’s also a tendency to perceive meaningful connections or patterns where they don't exist. It's a normal cognitive process that can be beneficial for learning and making predictions, but it can also lead to errors in judgment. Apophenia and Pareidolia are examples of this.

But why have we developed this way? Pattern recognition has played a significant role in survival throughout human history. For example, the ability to distinguish between edible and poisonous plants or to identify prey and predators has provided a survival advantage. It can be called an Evolutionary Advantage.

Humans have a strong desire to find order and meaning in the world. Pattern recognition helps us to understand and predict events, even when the underlying processes are complex or chaotic. In essence, our brains are constantly seeking patterns and meaning in the world around us, a process that is both a powerful tool and a source of potential errors in perception. 

We talked about the weirdness of it all. But I would argue that the weirdness around AI… is actually an important flag we should absolutely pay attention to. Just not the one so often talked about. And for me, its importance is this: tech needs to be more human. More meaningful. We need to be able to connect our experience with the devices and digital experiences around us to reality. Not the other way around. I’ve read someone say on social that the internet is the real world now. No, it’s the internet. And millions of years of evolution built into my genes tell me this.

But there’s also a huge opportunity for tech to integrate at a deep human level. One in which everyone wins. Think of the adoption levels and how lucrative it could be if done right. And the experiences could operate on a level barely scratched at currently. And I believe my approach for a hyper-personalized, truly useful AI assistant could begin to deliver on that idea.

So where do we go from here? I asked ChatGPT to give me a summarized way forward based on my perspective. This is what it wrote:

“As we stand at the threshold of what’s possible, it’s clear that the future of AI assistants won’t be unlocked by more features or faster models alone—it will require a fundamental shift in how we design, build, and relate to these systems. The way forward lies in embracing radical user-centricity: assistants that earn our trust, respect our boundaries, and truly serve our lives, not the interests of those who build them. That means diverse voices shaping the tech, transparent practices around data, and flexible systems that adapt to our uniqueness instead of forcing us into pre-built molds. If we get that right, we just might arrive at a world where talking  to your assistant feels less like issuing commands—and more like collaborating with someone who truly gets you.”

Scaling via radical user-centricity. I ask you, will any of the tech brands deliver this? They of course could, but if you believe the answer is no, then I ask you: why do you settle for the devices we have? And why do we allow the value prop to tip toward tech brands having all the power? And what should we be doing with the devices we use, the social media we engage with, the platforms we utilize every day?

Ready to experience an AI Assistant that truly assists you?
Yeah, me too. Let’s go build it, together.