Meditation Day 6: Thoughts Are An AI Large Language Model Token Generation Machine (August 8, 2025)

In working on various analogies for thoughts, the second-best analogy my meditation teacher, Micky, shared with me was how clouds form in the sky. If you were to describe the process, you might say that a wisp of a cloud forms and then grows over time. That’s what happens in our minds with our thoughts. While I might think that a thought arrives “fully formed,” that’s not what is actually happening. As I deepen my meditation practice, I’m learning to catch my thoughts as they arise, before they are fully formed. I’m still quite a beginner at this practice, but it’s a fascinating exercise.

But then Micky shared what I believe to be the best analogy for thoughts when he said, “Thoughts are an AI Large Language Model Token Generating Machine.” I knew precisely what he meant because of my own platform, AskBill.us.

If you’ve ever used my AI platform, you’ll notice that it feels a lot like talking directly to me. That’s because my AI is running in the background on tokens. Tokens allow Large Language Models to break complex thoughts and ideas down into smaller chunks and serve them up on an “as needed” basis.

In other words, an AI Large Language Model doesn’t actually have the complete answer ready before it starts giving you the answer. This is true for platforms like ChatGPT and several others. You ask a question and the system begins generating an answer in small chunks, each one predicted from the chunks that came before, rather than delivering the whole answer all at once.
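The token-by-token loop described here can be sketched in a few lines of Python. The tiny lookup table below is invented for illustration; a real LLM scores every token in a huge vocabulary with a neural network, but the shape of the loop is the same: each new chunk is conditioned only on what has been generated so far.

```python
import random

# Toy next-token table (made up for illustration). Each token maps to
# the chunks that may follow it; a real model predicts these dynamically.
NEXT_TOKEN = {
    "": ["The"],
    "The": [" sky"],
    " sky": [" is"],
    " is": [" blue.", " clearing."],
}

def generate(max_tokens=10):
    """Generate text one token at a time, like an LLM (or a thought)."""
    text, last = "", ""
    for _ in range(max_tokens):
        options = NEXT_TOKEN.get(last)
        if not options:      # no known continuation: generation stops
            break
        last = random.choice(options)
        text += last         # the "thought" grows one chunk at a time
    return text

print(generate())  # e.g. "The sky is blue."
```

Notice that at no point does the loop hold the finished sentence in advance; the answer only exists once the last token has been appended.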

Our Thoughts Are Incomplete

This is how our brain works. When a thought is formed in our minds, it’s like that wisp of cloud or a single token being used in an AI Large Language model. Our brains are so smart and good at prediction that even as the thought is forming, we are anticipating what the thought is and in some ways guiding the outcome.

But we don’t have to. Instead, we can stop the thought as it is being generated. Rather than filling in the blanks and guiding the thought to its probable completion, we can prevent the cloud from fully forming and ask the AI Large Language Model to stop after the first token has been delivered.
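This “stop after the first token” idea has a direct analogue in how generation loops are written: a token budget that can be set to one. A minimal sketch, with an invented list of tokens standing in for a model’s (or a mind’s) predictions:

```python
# An invented token sequence for illustration; real generation would
# choose each token as it goes rather than reading a fixed list.
TOKENS = ["I", " should", " have", " said", " something", "."]

def generate(max_tokens=None):
    """Emit tokens in order, stopping early if a budget is given."""
    out = []
    for i, token in enumerate(TOKENS, start=1):
        out.append(token)
        if max_tokens is not None and i >= max_tokens:
            break                 # the thought is interrupted here
    return "".join(out)

print(generate())              # the fully formed thought
print(generate(max_tokens=1))  # caught at the first wisp: "I"
```

Setting the budget to one is the code equivalent of catching the wisp of cloud before it grows.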

This is incredibly helpful to understand, as our brains produce so many thoughts. Sitting for a full hour of meditation skill development, I estimate that I’m catching a new thought about every 3-5 seconds. As soon as I recognize that this is a thought, the thought fades and disappears.

What I’m learning now, however, is that I don’t have to wait for the thought to be fully formed. I can catch the thought as it is forming. This has the benefit of speeding up my skill development training and gently interrupting the thoughts as they form. I need not spend time allowing all the tokens to create the fully formed thought. Instead, I can witness the beginning of the thought and catch it there before it is even fully formed.

This Isn’t Limited to Our Thoughts

I should point out that while the focus has been on catching my thoughts as they form, this same process is usually what happens when we begin speaking, or even as I write this blog post. Sure, I had a general idea of what I wanted to say, but the exact words are being produced on a moment-by-moment basis. Each sentence or two is essentially an AI Large Language Model token being generated in my brain, and as the tokens kept coming, this paragraph formed.

This is also how I now deliver global keynote speeches. Sure, I have an idea of what I want to say and the overall content of the session, but where I used to memorize lines and stress out about every word coming out of my mouth, I now allow my eyes and ears to guide me as I listen to my audience in real time. As I speak, I’m witnessing the reactions of my audience and fine-tuning as I go. That is what good AI does based on your response to the answers it gives.

This is even how I ended up writing Sage Business Development. I carved out 1 hour (up to 2 hours) per day and wrote non-stop. Did I know precisely what I would write in that 1-2 hours? Absolutely not. I had an idea of where I wanted to take the book based on which chapter I was working on, but as I put my fingers on the keyboard, the proverbial AI tokens were generated, and each paragraph led me to the next and the next until a full 350 pages completed the book.

Unconditional Inner Peace

All of this is in service of generating unconditional inner peace. As I’ve loosened my grip on thinking that I’m completely in control of my thoughts and come to see my thoughts as mostly random and sporadic, I can let go of any attachment I previously held to “my thoughts.” Instead, thoughts are just happening in my mind — all the time.

When I see that I am the awareness noticing these thoughts as my sixth sense, I can choose how I want to react (if at all) to these thoughts. The less I react, the more peaceful I become. As I work with the disturbances around me, I’m strengthening my ability to make a different choice and not be in reaction to these disturbances. Moreover, I’m increasing my ability to relax and find peace around each of these disturbances — including my thoughts, which I now see I can interrupt before they fully form.

I believe this is extremely helpful for me and I’m already noticing my slowing down and the increase in my presence. I realize that when I complete this retreat, there will be some “return to the norm” happening, but I also acknowledge that new skills are being developed and that those new skills will serve me for years to come.
