improved fork #6

Open · wants to merge 3 commits into base: main
57 changes: 50 additions & 7 deletions README.md
@@ -1,30 +1,73 @@

# original idea inspired by
https://github.com/Govind-S-B/ppt_generator

# better version
https://github.com/pleabargain/ppt_generator-1


# PPT Generator
A local, LLM-assisted PowerPoint generation tool

## Why
Writing presentations for course assignments is mostly boilerplate work, especially when even the lecturers don't care about it.
That's why I automated the boilerplate: just enter a topic and the tool generates a simple presentation, enough to satisfy the base course requirement.
Idea generation runs on Ollama locally. The tool pulls the list of locally installed models and lets you select one.

## What
Don't expect a lot of features.


I did add a language option. It's not perfect but it's a start.

It's a simple tool that generates (generic) PowerPoint content. It doesn't support images, and there is no styling to speak of. There are all kinds of things that could be added to it:

* add image generation
* add styling
* add speaker notes
* add slide numbers
* add slide layouts
* reference color palettes



## Running Locally
install [ollama](https://ollama.ai/download)
and have it up and running with the command `ollama serve` (needed on some systems only)

download the required model (the default can be changed at this [line](https://github.com/Govind-S-B/ppt_generator/blob/main/ppt_data_gen.py#L24))


```
ollama pull llama3.2
```


clone the repo and move into the directory

```
git clone https://github.com/pleabargain/ppt_generator-1
cd ppt_generator-1
```

install the required Python dependencies

```
pip install -r requirements.txt
```

if you hit import errors, install the updated LangChain packages too

```
pip install -U langchain-community
pip install -U langchain-ollama
```


run the Streamlit app

```
python -m streamlit run main.py
```

93 changes: 90 additions & 3 deletions main.py
@@ -1,20 +1,107 @@
import subprocess
import streamlit as st
from ppt_data_gen import slide_data_gen
from ppt_gen import ppt_gen
import random
from datetime import datetime
import time

# Define the function to get installed models first
def get_installed_models():
    try:
        # Run the `ollama list` command and capture its output
        result = subprocess.run(['ollama', 'list'], capture_output=True, text=True)
        # Split output into lines and skip the header line
        lines = result.stdout.strip().split('\n')[1:]
        # The model name is the first whitespace-separated field on each line
        models = [line.split()[0] for line in lines if line]
        return models
    except Exception as e:
        # Fall back to an empty list if the command fails
        st.warning(f"Could not fetch installed models: {str(e)}")
        return []

# Function to select an Ollama model
def select_ollama_model():
    models = get_installed_models()
    selected_model = st.selectbox("Select Ollama model:", models)
    return selected_model
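Since the generate flow depends on this parsing, it helps to see the `ollama list` output shape the code assumes. A minimal, standalone sketch (the header row and column layout shown here are assumptions about Ollama's current CLI output, not guaranteed by this repo):

```python
def parse_ollama_list(stdout: str) -> list[str]:
    """Extract model names from `ollama list` output.

    Assumes a header row (NAME  ID  SIZE  MODIFIED) followed by one
    model per line, name as the first whitespace-separated field.
    """
    lines = stdout.strip().split("\n")[1:]  # skip the header row
    return [line.split()[0] for line in lines if line.strip()]

# Sample of the output format this parser assumes:
sample = """NAME            ID            SIZE    MODIFIED
llama3.2:latest a80c4f17acd5  2.0 GB  3 days ago
mistral:latest  f974a74358d6  4.1 GB  2 weeks ago
"""
print(parse_ollama_list(sample))  # ['llama3.2:latest', 'mistral:latest']
```

If Ollama ever changes its table layout, the `[1:]` header skip and first-field split are the two places that would need adjusting.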

st.title("PPT Generator")

# Input for topic
topic = st.text_input("Enter a topic:")

# Slider for number of slides
num_slides = st.slider("Select number of slides:", min_value=3, max_value=20, value=7)

# Add language selection dropdown
languages = ["English", "French", "Spanish", "German", "Ukrainian", "Arabic"] # Added Ukrainian and Arabic
language = st.selectbox("Select Language:", languages, index=0)

# Get selected model and language
ollama_model = select_ollama_model()

if st.button("Generate") and topic:
    # Initialize progress bar
    progress_bar = st.progress(0)
    status_text = st.empty()

    # List of random status messages
    messages = [
        "Hold your horses—better yet, knit them a sweater while you wait",
        "Time flies when you’re watching cat videos. Try it",
        "If waiting were a sport, you’d be a gold medalist. Flex those skills",
        "Patience is a virtue, and so is binge-watching your favorite show",
        "Why not take a power nap? I’ll wake you up… maybe",
        "Hang tight—perfect time for a snack break! Got popcorn?",
        "Waiting is just the universe’s way of saying, “Do a little dance!”",
        "Hold on—time to practice your air guitar solo",
        "Think of this as a short intermission. Grab a soda and some popcorn",
        "While you’re waiting, why not perfect your paper airplane technique?"
    ]

    # Shuffle messages to ensure randomness
    random.shuffle(messages)

    # Function to update the status message with a spinning cursor
    def update_status_with_cursor(message):
        cursor_states = ['|', '/', '-', '\\']
        for cursor in cursor_states:
            status_text.text(f"{message} {cursor}")
            time.sleep(0.25)  # Change cursor every 0.25 seconds

    # Show up to four messages (each displays ~1 second with the cursor animation)
    for _ in range(4):
        if messages:
            update_status_with_cursor(messages.pop())

    # Generate slide data with the selected language
    data = slide_data_gen(topic, num_slides, ollama_model, language)
    progress_bar.progress(50)

    # Show a few more messages while the deck is assembled
    for _ in range(4):
        if messages:
            update_status_with_cursor(messages.pop())

    ppt_file = ppt_gen(data)
    progress_bar.progress(100)

    # Reset status
    status_text.text("Presentation ready for download!")
    progress_bar.empty()

    # Create a file name using the topic, model, and current date
    date_stamp = datetime.now().strftime("%Y%m%d")
    sanitized_topic = topic.replace(" ", "_")[:10]  # Truncate topic to 10 characters
    file_name = f"{sanitized_topic}_{ollama_model}_{date_stamp}.pptx"

    st.download_button(
        label="Download Presentation",
        data=ppt_file,
        file_name=file_name,
        mime="application/vnd.openxmlformats-officedocument.presentationml.presentation",
    )
else:
    st.warning("Please enter a topic to generate the presentation.")
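The download filename logic is easy to sanity-check in isolation. A sketch with the date pinned for determinism (`build_file_name` is a hypothetical helper used for illustration, not a function in main.py):

```python
def build_file_name(topic: str, model: str, date_stamp: str) -> str:
    # Mirrors main.py: truncate the topic to 10 characters, join parts with "_"
    sanitized_topic = topic.replace(" ", "_")[:10]
    return f"{sanitized_topic}_{model}_{date_stamp}.pptx"

print(build_file_name("Ethics in Design", "llama3.2", "20240101"))
# Ethics_in__llama3.2_20240101.pptx
```

Note that installed model tags often look like `llama3.2:latest`; the colon is rejected by some filesystems, so stripping it before building the name may be worth adding.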
55 changes: 23 additions & 32 deletions ppt_data_gen.py
@@ -1,5 +1,5 @@
import re
from langchain_ollama import OllamaLLM


def extract_items(input_string):
@@ -20,61 +20,52 @@ def extract_items(input_string):
    return items


def slide_data_gen(topic, num_slides, model, language="English"):
    llm = OllamaLLM(model=model, temperature=0.4)

    slide_data = []

    point_count = 5

    # Generate title and subtitle
    title_subtitle = extract_items(llm.invoke(f"""
    You are a text summarization and formatting specialized model that fetches relevant information
    For the topic "{topic}" suggest a presentation title and a presentation subtitle in {language}. It should be returned in the format :
    << "title" | "subtitle" >>

    example :
    << "Ethics in Design" | "Integrating Ethics into Design Processes" >>
    """))

    # Check if title and subtitle were extracted correctly
    if len(title_subtitle) < 2:
        raise ValueError("Failed to extract title and subtitle from the model's response.")

    slide_data.append(title_subtitle)

    # Generate table of contents
    toc = extract_items(llm.invoke(f"""
    You are a text summarization and formatting specialized model that fetches relevant information

    For the presentation titled "{slide_data[0][0]}" and with subtitle "{slide_data[0][1]}" for the topic "{topic}"
    Write a table of contents containing the title of each slide for a {num_slides} slide presentation in {language}.
    It should be of the format :
    << "slide1" | "slide2" | "slide3" | ... | >>

    example :
    << "Introduction to Design Ethics" | "User-Centered Design" | "Transparency and Honesty" | "Data Privacy and Security" | "Accessibility and Inclusion" | "Social Impact and Sustainability" | "Ethical AI and Automation" | "Collaboration and Professional Ethics" >>
    """))

    slide_data.append(toc)

    for subtopic in slide_data[1]:
        data_to_clean = llm.invoke(f"""
        You are a content generation specialized model that fetches relevant information and presents it in clear concise manner

        For the presentation titled "{slide_data[0][0]}" and with subtitle "{slide_data[0][1]}" for the topic "{topic}"
        Write the contents for a slide with the subtopic {subtopic} in {language}.
        Write {point_count} points. Each point 10 words maximum.
        Make the points short, concise and to the point.
        """)

        cleaned_data = llm.invoke(f"""
        You are a text summarization and formatting specialized model that fetches relevant information and formats it into user specified formats

        Given below is a text draft for a presentation slide containing {point_count} points, extract the {point_count} sentences and format it as:
        << "point1" | "point2" | "point3" | ... | >>

        example :
        << "Foster a collaborative and inclusive work environment." | "Respect intellectual property rights and avoid plagiarism." | "Uphold professional standards and codes of ethics." | "Be open to feedback and continuous learning." >>

        -- Beginning of the text --
        {data_to_clean}
        -- End of the text --
        """)

        slide_data.append([subtopic] + extract_items(cleaned_data))

    return slide_data
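The body of `extract_items` is collapsed in this diff. For readers, here is a minimal `re`-based sketch of a parser for the `<< "item1" | "item2" >>` format the prompts request; it is an illustration under that format assumption, not the repo's actual implementation:

```python
import re

def extract_items_sketch(input_string: str) -> list[str]:
    # Grab everything between << and >>, then pull out each "quoted" item.
    match = re.search(r"<<(.*?)>>", input_string, re.DOTALL)
    if not match:
        return []  # model reply did not follow the requested format
    return re.findall(r'"([^"]+)"', match.group(1))

reply = '<< "Ethics in Design" | "Integrating Ethics into Design Processes" >>'
print(extract_items_sketch(reply))
# ['Ethics in Design', 'Integrating Ethics into Design Processes']
```

Returning an empty list on a malformed reply is what makes the `len(title_subtitle) < 2` guard in `slide_data_gen` useful: local models do not always honor the format.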
2 changes: 2 additions & 0 deletions requirements.txt
@@ -0,0 +1,2 @@
streamlit
python-pptx