How modern Large Language Models (LLMs) do Magic

September 27, 2024 #LLMs #A.I. #Chatbots #Large Language Models #LLMs doing tasks #ChatGPT #Finance #Stock Price

Intro

Okay, so how do Large Language Models get the current stock price of a company (or something similar)? And why is it not as intelligent as it seems?

(This is running on my old 8 GB RAM laptop and it's not that slow. Innit?)

So what just happened there?

The language model clearly displays the stock data after I ask for it, so how does it do that?

Explanation

So basically, when you ask an LLM a question about a stock price, it sends some of the keywords you mentioned as parameters to the software / system / service it is linked to (THAT YOU'VE CREATED), and that software displays the information in a clever way (IN THE WAY YOU DESIGNED THEM TO INTERACT; no magic, guys, sorry... Overengineered? Yes).

TL;DR

Basically, the LLM responds with a configuration file plus some additional text, and the software in the background formats and displays everything for you in a clever way.
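To make that concrete, here's a minimal sketch of what the software in the background might do. The reply format and the `get_stock_price` action name are made up for illustration; the real format is whatever you design your LLM and backend to agree on:

```python
import json

# Hypothetical raw LLM reply: some chatty text plus a small JSON
# "configuration" line that the backend knows how to parse.
llm_reply = """Sure, let me look that up for you.
{"action": "get_stock_price", "ticker": "AAPL"}"""

def handle_reply(reply):
    # Split the reply into plain text (shown to the user) and the
    # JSON configuration line the software actually acts on
    text_lines, config = [], None
    for line in reply.splitlines():
        stripped = line.strip()
        if stripped.startswith("{"):
            config = json.loads(stripped)  # the "configuration file" part
        else:
            text_lines.append(stripped)
    return " ".join(text_lines).strip(), config

text, config = handle_reply(llm_reply)
print(text)  # the chatty part, displayed to the user
if config and config.get("action") == "get_stock_price":
    # This is where your service would call out and format the result
    print(f"Software would now fetch the price for {config['ticker']}")
```

The point is that all the "intelligence" in that last step lives in the plain code you wrote, not in the model.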

You can see how this is overengineered and maybe a bit unreliable, and yes, there are better ways of getting and interacting with this information.

Basically, the "magic" of LLMs is in how good a service you have built to take LLM output as input and do something with it.

I do think there is real utility in this, especially as LLMs get smaller and better. But... for now...

You could just:

# Import necessary libraries
import yfinance as yf  # Import yfinance library for fetching stock data

# Function to fetch stock price based on company name
def fetch_stock_price(company_name):
    try:
        # Fetch stock data using yfinance ( Read the docs )
        stock_data = yf.Ticker(company_name)  # Create a Ticker object for the specified company_name
        current_price = stock_data.history(period='1d')['Close'].iloc[-1]  # Get the closing price of the stock for the last day
        return current_price  # Return the fetched stock price
    except Exception as e:
        print(f"Error: {str(e)}")  # Print an error message if fetching data fails
        return None  # Return None if fetching stock price fails

# Main program
if __name__ == "__main__":
    print("Welcome! This program fetches the current stock price for a company from Yahoo Finance.")
    
    while True:  # Start an infinite loop
        # Prompt the user to enter a company name
        company_name = input("Enter the company name or 'exit' to quit: ")
        
        if company_name.lower() == 'exit':  # Check if user wants to exit
            print("Exiting the program.")
            break  # Exit the loop and terminate the program
        
        # Fetch the stock price based on the company name entered
        stock_price = fetch_stock_price(company_name)
        
        if stock_price is not None:  # Check if stock price was fetched successfully
            print(f"The current stock price of {company_name} is ${stock_price:.2f}")  # Print the fetched stock price
        else:
            print(f"Failed to fetch stock price for '{company_name}'. Please check the company name and try again.")
            # Print an error message if fetching stock price failed

To run this, paste the code into a file, make sure the filename ends with .py, and run:

python filename_orwhateveryouwannacallit.py  # Or execute the Python code whichever way you usually do it

And then you can do something like autocorrecting the input, or make a dictionary file to match company names to ticker symbols, because this will use the ticker symbol, not the company name (that's how the yfinance library works). Now, I know it's not an exact replacement, and you'd probably have to know a little bit about it before you just ask for the stock price, but I think it's much more efficient than training an LLM, paying millions of dollars for GPUs, and then having to program something to take LLM output as input anyway.
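The dictionary idea could look something like this. The table below is a tiny hand-made example (the entries are just illustrative; you'd extend it, or load it from a file):

```python
# Hand-made lookup table mapping common company names to ticker symbols.
# Entries here are illustrative; extend it or load it from a file.
NAME_TO_TICKER = {
    "apple": "AAPL",
    "microsoft": "MSFT",
    "tesla": "TSLA",
}

def resolve_ticker(user_input):
    # Accept a known company name; otherwise assume the input
    # is already a ticker symbol and just uppercase it
    key = user_input.strip().lower()
    return NAME_TO_TICKER.get(key, user_input.strip().upper())

print(resolve_ticker("Apple"))  # -> AAPL
print(resolve_ticker("msft"))   # -> MSFT
print(resolve_ticker("NVDA"))   # not in the table, treated as a ticker
```

You'd call `resolve_ticker` on the user's input before handing it to `fetch_stock_price`.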

Conclusion

The best way to utilise the magic would be some agent system with a big decision-making LLM that talks to other LLM "magicians" as tools when it needs them. That's a whole topic in itself... And it's far from reliable. Even the best LLM (OpenAI's GPT-4) is laughably unreliable, let alone the smaller models.
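The shape of that system can be sketched in a few lines. Big caveat: the "decision maker" below is a dumb keyword matcher standing in for the routing LLM, and the tool functions are placeholders for the smaller specialist models or services; everything here is a toy illustration, not a real agent framework:

```python
# Placeholder "tools" -- in the real system these would be calls to
# smaller specialist LLMs or to plain services like the yfinance script.
def stock_tool(query):
    return "stock tool would fetch a price here"

def weather_tool(query):
    return "weather tool would fetch a forecast here"

TOOLS = {
    "stock": stock_tool,
    "weather": weather_tool,
}

def decide_and_run(query):
    # Toy decision maker: pick the first tool whose keyword appears
    # in the query. A real agent would ask an LLM to choose instead.
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return "no tool matched; answer directly"

print(decide_and_run("What's the stock price of Apple?"))
```

Swap the keyword matcher for an LLM and you have the basic agent loop, with all the same reliability caveats.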

But we will be making this type of system soon.

This was a quick little tease video and post! If you guys are interested in this, send your questions, and in the next posts we will go over everything properly. I'll make a little tutorial on how to set everything up, and I'll give away the code. It's a bit too much for one engaging blog post, and I want to leave you wanting more!