Large Language Models (LLMs)
February 5th, 2025

LLMs are eating up the software development space, from back-end to front-end.

From natural language processing, to summarising a text document, to financial statement analysis, LLMs are reaching into nearly every domain; maybe we should say AGI is coming upon us. A few must-have skills are necessary to effectively understand and use LLMs to your advantage: firstly, intermediate-level programming experience in Python, plus a bit of technical knowledge about Large Language Model (a.k.a. LLM) terms like embeddings, structured outputs, tools, and function calling.

What is a Large Language Model?

Imagine a computer program capable of performing human-level tasks: an engine, something like a central nervous system, loaded with knowledge, experiences, actions, and imagination, with a human-level understanding of our natural language, and yet still a computer program rather than human intuition. That is what a Large Language Model is.

Prompt Engineering:

Engaging in human-level interaction with a Large Language Model requires you to describe to the model what it should do, using ordinary text or speech. The more detailed the description and the more precise your choice of words, the better the final output. Engineering a prompt that obtains a premium result also requires knowing how to set the model's parameters, which shape how composed and precise the response is. For example, every LLM has an adjustable parameter known as temperature: lower temperatures, such as zero, give more definitive, conclusive results, while higher temperatures give more varied ones. Token-selection parameters such as top-k and top-p further constrain which tokens the model may sample from when generating a response. Here are a few examples to try on your own.
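To build intuition for what top-k and top-p do, here is a minimal pure-Python sketch of the two filtering strategies over a toy next-token distribution; the token probabilities are invented purely for illustration, not taken from any real model:

import math

def top_k_filter(probs, k):
    # Keep only the k most probable tokens, then renormalise.
    kept = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

def top_p_filter(probs, p):
    # Keep the smallest set of most-probable tokens whose
    # cumulative probability reaches p, then renormalise.
    kept, cumulative = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: prob / total for tok, prob in kept.items()}

# Invented example distribution over four candidate next tokens.
next_token_probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "rock": 0.05}

print(top_k_filter(next_token_probs, k=2))    # only the 2 likeliest tokens survive
print(top_p_filter(next_token_probs, p=0.8))  # "cat" + "dog" already reach 0.8

The model then samples from the surviving, renormalised tokens, which is why lower k or p values make the output more focused and repeatable.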

Pre-requisites: Python 3.10 or a more recent version installed, the google-generativeai package, and a Gemini API key.

import os

import google.generativeai as genai

# Read the key from the environment rather than hard-coding it in source.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Give a detailed explanation of how AI works",
    generation_config=genai.GenerationConfig(
        max_output_tokens=1000,
        temperature=0.1,  # low temperature for a focused, definitive answer
    ),
)

print(response.text)