LuminAI


Model Description

LuminAI is a supportive AI assistant designed to provide immediate emotional support outside of regular consulting hours. It acts as a supplementary tool for patients and therapists, making mental health care more accessible and responsive to users' needs.

Model Demo

Demo

Model Dataset

The chatbot was trained on conversational data designed to mimic exchanges between a patient and a therapist. Five topics were chosen, and 100 conversations were gathered for each:

  • General
  • Relationships
  • Insecurities
  • Victim Mentality
  • Self-Improvement
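As an illustration, the training data could be represented like so. This is a sketch only: the card does not publish the dataset schema, so the field names (`topic`, `patient`, `therapist`) and the sample text are assumptions.

```python
# Hypothetical record layout for one patient–therapist exchange.
# The real dataset's schema is not published; these field names are assumed.
conversations = [
    {
        "topic": "Relationships",
        "patient": "I feel like my partner never listens to me.",
        "therapist": "That sounds really frustrating. What happens when you try to share how you feel?",
    },
]

# Five topics with 100 conversations each gives 500 conversations in total.
topics = ["General", "Relationships", "Insecurities",
          "Victim Mentality", "Self-Improvement"]
print(len(topics) * 100)  # 500
```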

How to use

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("alvinwongster/LuminAI")
model = AutoModelForCausalLM.from_pretrained("alvinwongster/LuminAI")

# Use a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

prompt = "What is depression?"
full_prompt = f"User: {prompt}\nBot:"

# Tokenize the prompt and move the tensors to the same device as the model
inputs = tokenizer(full_prompt, return_tensors="pt")
inputs = {key: val.to(device) for key, val in inputs.items()}

# Sampling settings that reduce repetition while keeping responses varied.
# do_sample=True is required for temperature/top_p/top_k to take effect.
outputs = model.generate(
  **inputs,
  max_new_tokens=650,
  do_sample=True,
  repetition_penalty=1.3,
  no_repeat_ngram_size=3,
  temperature=0.8,
  top_p=0.9,
  top_k=50
)

# Decode the output and keep only the text after the final "Bot:" marker
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

if "Bot:" in response:
  response = response.split("Bot:")[-1].strip()

print(response)

Model Metrics

To evaluate the chatbot's performance for our use case, the following weighted metrics system was used:

  • Empathy Score (40%):
    • Measures how well the chatbot responds with empathy.
  • Human-Likeness Score (20%):
    • Assesses how natural and human-like the responses feel.
  • BERTScore (30%):
    • Evaluates semantic similarity between chatbot replies and therapist responses; the weight is split equally between F1, Recall, and Precision.
  • Time Taken (10%):
    • Time taken to generate a response; shorter times improve the user experience.
| Metrics             | GPT   | Llama | LuminAI |
|---------------------|-------|-------|---------|
| Empathy Score       | 0.8   | 0.79  | 0.79    |
| Human Likeness      | 0.27  | 0.45  | 0.5     |
| BERTScore F1        | 0.45  | 0.48  | 0.51    |
| BERTScore Recall    | 0.51  | 0.53  | 0.55    |
| BERTScore Precision | 0.41  | 0.44  | 0.47    |
| Time Taken          | 89.65 | 15.85 | 39.42   |
| Total Score         | 0.54  | 0.65  | 0.63    |
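The weighting above can be sketched as a small scoring function. Note the card does not state how raw generation time is normalized into a 0–1 score, so the `fastest_time / time_taken` normalization below is an assumption; with it, LuminAI's numbers give roughly 0.61 rather than the reported 0.63, so the actual normalization likely differs.

```python
def total_score(empathy, human_likeness, bert_f1, bert_recall, bert_precision,
                time_taken, fastest_time):
    """Weighted score: 40% empathy, 20% human-likeness, 30% BERTScore, 10% time."""
    bert = (bert_f1 + bert_recall + bert_precision) / 3  # 30% split equally
    time_score = fastest_time / time_taken  # assumed normalization, not from the card
    return (0.40 * empathy
            + 0.20 * human_likeness
            + 0.30 * bert
            + 0.10 * time_score)

# LuminAI's numbers from the table above (time normalization is illustrative only)
print(round(total_score(0.79, 0.5, 0.51, 0.55, 0.47, 39.42, 15.85), 2))  # 0.61
```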

GitHub Link

Visit here for more information on how I trained the model.

Try the product here!
