
From Multiplan to Multimodal: A CFO’s 40-Year Tech Journey into AI

Published 04/16/2025


Written by Jeffrey Westcott, CFO, CSA.

 

I received one of the first Apple Macintoshes back in January 1984 when I attended Drexel University. It was branded the Apple DU, had a whopping 128K of memory, and was the same machine as the Apple Macintosh soon to be released to the public. Many of you reading this are too young to remember the iconic Apple television commercial for the Macintosh, which aired only once during the 1984 Super Bowl, although others, like me, remember it vividly.

Along with this Apple DU came a 3½" Microsoft Multiplan diskette. The diskette still sits on my desk as a reminder of how far we've come in the past forty years. Wave after wave of technological advancement has followed, and the journey through the years has been dizzying.

For the past eleven years I've worked as the CFO of the Cloud Security Alliance, a cybersecurity not-for-profit with offices spanning the globe that promotes vendor-neutral best practices for cloud and AI adoption. After being exposed to the recent explosion of artificial intelligence, I felt it was my duty to jump in and test these unknown waters. I've envisioned how artificial intelligence will disrupt the accounting and finance industries, so why not put these large language models (LLMs) to work in both my personal and professional life?

There is incredible potential and uncertainty in what lies ahead: the capabilities, rewards, and risks of AI will impact our industries, academia, and society at large. Some of these impacts are obvious, while others will be more subtle, yet increasingly present.

So the purpose of this blog, which will hopefully be ongoing, is two-fold. First, I hope to encourage others beginning this journey to continue forging a path into this evolving technology; and second, I hope that this blog creates the impetus for me to continue diving into the various AI engines and become more proficient at harnessing the capabilities of these increasingly powerful models.

 

Starting Out

Over the past eighteen months I've been playing around with LLMs, attempting to grasp the functionality of AI and how it could make my life easier. I started off with simple prompts, driven mostly by personal curiosity, as I gained a foothold and saw what these tools could offer. Simple random thoughts came to mind: “How was AI used in The Jetsons?” or “How do I adjust the valves on a 1972 VW bus with a 1700cc motor?”

These prompts and many others produced insightful and entertaining responses, with the latter being quite accurate in its details, including the valve tolerances on the bus (0.006” in case you’re curious).

This dovetailed into simple prompts related to work: asking the LLM for feedback on contracts, data trends, and the impacts of the sociopolitical climate on the cloud and cybersecurity industries. The list, and the prompts that this spawned, is ever-expanding. I decided to write a bit to convey my learnings as I continue to muddle my way through my experiences with AI and LLMs. I confided in my co-worker/AI guru that I felt I was way behind the AI curve, and his encouraging response was that I was further along than I'd realized.

At the Digital CPA conference this past December in Denver, I heard a quote (which has often been repeated by others since): “you won’t be replaced by AI, but you will be replaced by others possessing a greater knowledge of AI than you.” From an accounting and finance perspective, rapid change is afoot, although this could be said of virtually any other industry.

 

Initial Tools

My personal journey began with the free version of ChatGPT. This quickly evolved into purchasing the paid version for twenty bucks a month, and most recently the enhanced two-hundred-dollar-a-month version of ChatGPT. I’ll provide more details and examples in future posts once I figure out the differences and benefits of the different models.

One significant item of note: be careful of any prompts entered into ChatGPT containing sensitive data. The data that you’ve freely provided in your prompts may fuel the results of others’ prompts (meaning that your data is not secure). You can change your settings to protect your sensitive or proprietary data: go to your profile in the upper right-hand corner, then Settings > Data Controls > Improve the model for everyone, and set this to off. This example is for ChatGPT 4o, and keeping your data internal may only be an option on the pricier versions.

I have also added Claude to my repertoire. My initial thoughts are that Claude is better for qualitative prompts, while ChatGPT is better for quantitative and analytical prompts and data retrieval, but I will discuss my findings in later posts.

At the suggestion of a colleague, I have prompted Claude to assist in creating prompts, such as “Can you please assist me in how to use Claude for data analysis and other forecasting models?” This is a very rudimentary prompt, but it gets the creative juices flowing for innumerable prompts to follow. The prompts get more elaborate and can build upon the earlier responses. I am basically carrying on a conversation with Claude, which eerily reminds me of HAL 9000 from 2001: A Space Odyssey (although Claude is a bit less antagonistic). By comparison (in my opinion), ChatGPT (and its multiple models) feels more mechanical in its responses.
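For readers who would rather experiment outside the chat window, the same back-and-forth can be scripted. Below is a minimal sketch, assuming the Anthropic Python SDK and an API key in your environment; the model name and the follow-up prompt are illustrative placeholders, not the exact prompts I used.

# Minimal sketch: a two-turn "prompt to build prompts" conversation with Claude.
# Assumes: pip install anthropic, and ANTHROPIC_API_KEY set in your environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

history = [{"role": "user",
            "content": "Can you please assist me in how to use Claude for "
                       "data analysis and other forecasting models?"}]

first = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative; substitute a current Claude model
    max_tokens=1024,
    messages=history,
)
print(first.content[0].text)

# Build on the earlier response, just like a follow-up question in the chat window.
history.append({"role": "assistant", "content": first.content[0].text})
history.append({"role": "user",
                "content": "Great. Now draft three more specific prompts I could "
                           "use for forecasting from a spreadsheet of monthly data."})

second = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=history,
)
print(second.content[0].text)

Appending each reply to the message history is what lets the later prompts build on the earlier responses, which is exactly what the chat interface is doing behind the scenes.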

 

Looking Forward

I have been uploading Excel data into ChatGPT and will touch upon that in future posts, as well as privacy and security concerns, the differences between the LLMs, GenAI, and any other topics that rise to the surface as I navigate these ever-changing, uncharted waters.

I hope that you will join me. Play around. Ask the dumb questions. Tweak them and prompt the LLM again. Like me, you’ll realize rather quickly how to formulate effective, increasingly complex prompts, and how changing or adding a word or phrase can produce vastly different results. Learning to craft effective prompts can be very powerful. And the LLM will not get annoyed by you asking similar prompts over and over again (unlike some of my human peers).

The Macintosh moment in 1984 pales in comparison to what is presently upon us; we are experiencing a transition occurring at breakneck speed. Where you or I fall on this learning curve is anyone’s guess, but the fact that you made it to the end of this post indicates that something has piqued your interest. Hopefully you will find this beneficial, and I plan to keep posting as I make sense of these increasing winds of change that are upon us.

 


Jeffrey Westcott is the CFO of the Cloud Security Alliance. He has worked with the CSA since 2014, and resides in Bellingham, Washington.
