Markdown Converter
Agent skill for markdown-converter
This project is a lightweight AI-powered assistant designed to enhance the learning experience of students in higher education. Users can ask questions in natural language based on uploaded study materials (PDFs), and receive intelligent, context-aware answers directly sourced from the selected topic.
Built with LangChain, Chroma, and OpenAI, the system leverages large language models and a retrieval-augmented generation (RAG) architecture to provide accurate, conversational answers through a minimal Streamlit interface.
The assistant supports:

- Topic-based organization of uploaded study materials (PDFs)
- Natural-language, topic-specific questions answered from the selected material
- An admin portal for creating topics and uploading documents
- A student UI for selecting a topic and asking questions

Ideal for test prep, revision, or flipped classroom models.
| Feature | Responsibility | Implemented By |
|---|---|---|
| PDF Parsing | Break PDF into clean text chunks | `document_loader.py` |
| Vector Store | Embed & store by topic ID | `vector_store.py` |
| LLM Chain | Ask topic-specific questions | `qa_chain.py` |
| Admin Portal | Create topic, upload study material | |
| Student UI | Select topic, ask questions | |
| SQLite DB | Manage users, topics, sessions, messages | (schema sketch below) |
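The snippets below cover only the retrieval pipeline; the SQLite layer is described but not shown. A minimal schema sketch, assuming one table per entity named in the row above (all table and column names are hypothetical, not taken from the project):

```python
import sqlite3

# Assumed schema: one table each for users, topics, sessions, and messages.
SCHEMA = """
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY,
    username TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS topics (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    persist_dir TEXT NOT NULL  -- where the topic's Chroma index lives
);
CREATE TABLE IF NOT EXISTS sessions (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    topic_id INTEGER REFERENCES topics(id)
);
CREATE TABLE IF NOT EXISTS messages (
    id INTEGER PRIMARY KEY,
    session_id INTEGER REFERENCES sessions(id),
    role TEXT NOT NULL,     -- "user" or "assistant"
    content TEXT NOT NULL
);
"""

def init_db(path="app.db"):
    """Create the tables if they don't exist and return a connection."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    conn.commit()
    return conn
```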
`document_loader.py`

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import CharacterTextSplitter


def load_and_split_pdf(file_path, chunk_size=500, chunk_overlap=50):
    """Load a PDF and split its pages into overlapping text chunks."""
    loader = PyPDFLoader(file_path)
    pages = loader.load()
    splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    return splitter.split_documents(pages)
```
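For example, splitting an uploaded PDF (the file name here is a placeholder):

```python
# "lecture1.pdf" stands in for an uploaded study PDF.
chunks = load_and_split_pdf("lecture1.pdf")
print(f"{len(chunks)} chunks; first begins: {chunks[0].page_content[:80]!r}")
```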
`vector_store.py`

```python
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

PERSIST_DIR = "chroma_db"


def create_chroma_index(chunks, persist_directory=PERSIST_DIR):
    """Embed the chunks and persist them to a Chroma index on disk."""
    embeddings = OpenAIEmbeddings()
    vectorstore = Chroma.from_documents(
        documents=chunks,
        embedding=embeddings,
        persist_directory=persist_directory,
    )
    vectorstore.persist()


def load_chroma_retriever(persist_directory=PERSIST_DIR):
    """Reopen a persisted Chroma index and expose it as a retriever."""
    embeddings = OpenAIEmbeddings()
    vectorstore = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
    return vectorstore.as_retriever()
```
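The feature table says chunks are embedded and stored by topic ID, while `vector_store.py` as shown uses a single `chroma_db` directory; presumably the topic ID is folded into the persist path. A sketch of that convention (the per-topic directory layout is an assumption, not confirmed by the source):

```python
import os


def topic_persist_dir(topic_id, base_dir=PERSIST_DIR):
    # Assumed convention: one Chroma directory per topic, e.g. chroma_db/42.
    return os.path.join(base_dir, str(topic_id))


def index_topic(chunks, topic_id):
    """Build and persist the index for a single topic."""
    create_chroma_index(chunks, persist_directory=topic_persist_dir(topic_id))


def retriever_for_topic(topic_id):
    """Load the retriever scoped to one topic's index."""
    return load_chroma_retriever(persist_directory=topic_persist_dir(topic_id))
```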
`qa_chain.py`

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI


def build_qa_chain(retriever):
    """Build a RetrievalQA chain that answers from the topic's retriever."""
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    return RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
```
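Putting the three modules together, a minimal end-to-end flow might look like the following (the PDF path and question are placeholders, and an `OPENAI_API_KEY` is assumed to be set in the environment):

```python
from document_loader import load_and_split_pdf
from vector_store import create_chroma_index, load_chroma_retriever
from qa_chain import build_qa_chain

# 1. Admin side: parse the uploaded PDF and build the topic's vector index.
chunks = load_and_split_pdf("calculus_notes.pdf")
create_chroma_index(chunks)

# 2. Student side: load the retriever and ask a topic-specific question.
retriever = load_chroma_retriever()
qa = build_qa_chain(retriever)
print(qa.run("What is the chain rule?"))
```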