
Commit

CONTRIBUTING.md, CODE_OF_CONDUCT.md added and README.md modified
camilo-basualdo committed Nov 29, 2023
1 parent 0dcce60 commit 946f2fa
Showing 11 changed files with 586 additions and 79 deletions.
71 changes: 71 additions & 0 deletions CODE_OF_CONDUCT.md
@@ -0,0 +1,71 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at incidents@leniolabs.com. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
10 changes: 10 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,10 @@
## Contributing

We welcome contributions from developers of all skill levels! If you'd like to contribute to promptwizard, please follow these steps:

1. Fork the project repository.
2. Create a new branch for your changes.
3. Make your changes and test them thoroughly.
4. Commit your changes and push them to your fork.
5. Create a pull request to merge your changes into the main project.

Before submitting your pull request, please ensure that your code adheres to the project's coding standards and includes any necessary documentation or tests.
2 changes: 1 addition & 1 deletion README.md
@@ -250,4 +250,4 @@ If you want to see usage examples, we provide the following Colab notebook for y

## Creators

-promptwizard is crafted with love by [Leniolabs](https://www.leniolabs.com/) and a growing community of contributors. We build digital experiences with your ideas. [Get in touch](https://www.leniolabs.com/services/team-augmentation/?utm_source=promptree&utm_medium=banner&utm_campaign=leniolabs&utm_content=promptree_github)! Also, if you have any questions or feedback about promptwizard, please feel free to contact us at info@leniolabs.com. We'd love to hear from you!
+promptwizard is crafted with love by [Leniolabs](https://www.leniolabs.com/) and a growing community of contributors. We build digital experiences with your ideas. [Get in touch](https://www.leniolabs.com/services/team-augmentation/?utm_source=promptwizard&utm_medium=banner&utm_campaign=leniolabs&utm_content=promptwizard_github)! Also, if you have any questions or feedback about promptwizard, please feel free to contact us at info@leniolabs.com. We'd love to hear from you!
3 changes: 2 additions & 1 deletion promptwizard/evals/__init__.py
@@ -6,4 +6,5 @@
from .code_generation import *
from .json_validation import *
from .semantic_similarity import *
-from .logprobs import *
\ No newline at end of file
+from .logprobs import *
+from .assistants import *
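
Since the package `__init__` re-exports each evals module via star imports, the new evaluator also becomes importable at package level. A small sketch (assuming `Assistants` is the public class added in `assistants.py` below):

```python
# Package-level import made possible by the new star import (illustrative).
from promptwizard.evals import Assistants
```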
127 changes: 127 additions & 0 deletions promptwizard/evals/assistants.py
@@ -0,0 +1,127 @@
from ..openai_calls import openai_call
from concurrent.futures import ThreadPoolExecutor
from typing import List
import time
from openai import OpenAI
import httpx

class Assistants:
    def __init__(self, test_cases: List[dict], prompts: List[str], model_test: str = "gpt-3.5-turbo", model_test_temperature: float = 0.6, model_test_max_tokens: int = 1000, best_prompts: int = 2, timeout: int = 10, n_retries: int = 5):

        """
        Initialize an Assistants instance.

        Args:
            test_cases (list): List of test cases to evaluate; each is a dict with 'input' and 'output' keys.
            prompts (list): List of candidate system prompts to evaluate.
            model_test (str): The language model used for testing.
            model_test_temperature (float): The temperature parameter for the testing model.
            model_test_max_tokens (int): The maximum number of tokens allowed for the testing model.
            best_prompts (int): Number of best prompts to return.
            timeout (int): Overall timeout, in seconds, for requests made through the OpenAI client.
            n_retries (int): Number of retries for failed API calls.

        Note:
            The 'system_gen_system_prompt' attribute is predefined within the class constructor.
        """

        self.test_cases = test_cases
        self.model_test = model_test
        self.model_test_temperature = model_test_temperature
        self.model_test_max_tokens = model_test_max_tokens
        self.system_gen_system_prompt = """Your job is to generate system prompts for GPT, given a description of the use-case and some test cases.
In your generated prompt, you should describe how the AI should behave in plain English. Include what it will see, and what it's allowed to output. Be creative with prompts to get the best possible results. The AI knows it's an AI -- you don't need to tell it this.
You will be graded based on the performance of your prompt... but don't cheat! You cannot include specifics about the test cases in your prompt. Any prompts with examples will be disqualified.
Specify in the prompts that you generate that they give a step-by-step response.
Most importantly, output NOTHING but the prompt. Do not include anything else in your message."""
        self.prompts = prompts
        self.best_prompts = best_prompts
        self.n_retries = n_retries

        # Overall request timeout, with separate read/write/connect limits.
        self.client = OpenAI(timeout=httpx.Timeout(timeout, read=5.0, write=10.0, connect=3.0))

    def test_candidate_prompts(self):
        prompt_results = {prompt: {'correct': 0, 'total': 0} for prompt in self.prompts}
        results = [{"method": "Assistants"}]

        def evaluate_prompt(prompt):
            prompt_and_results = [{"prompt": prompt}]
            # One assistant (with the code interpreter tool) and one thread per candidate prompt.
            assistant = openai_call.create_assistant(self.model_test, prompt, [{"type": "code_interpreter"}])
            thread = openai_call.create_thread()

            for test_case in self.test_cases:
                openai_call.create_message(thread.id, "user", test_case['input'])
                run = openai_call.create_run(thread.id, assistant.id)

                # Poll until the run completes; sleeping between checks avoids a busy-wait
                # against the API.
                while openai_call.retrieve(thread.id, run.id).status != "completed":
                    time.sleep(1)

            # The API returns messages newest first; walk them oldest-to-newest and group
            # consecutive assistant messages into a single reply per test case.
            thread_messages = openai_call.thread_messages(thread.id)
            assistant_messages = []
            current_message = None

            for message in reversed(thread_messages.data):
                if message.role == "assistant":
                    if current_message is None:
                        current_message = {"text": message.content[0].text.value}
                    else:
                        current_message["text"] += " " + message.content[0].text.value
                elif message.role == "user":
                    if current_message is not None:
                        assistant_messages.append(current_message)
                        current_message = None

            if current_message is not None:
                assistant_messages.append(current_message)

            # A test passes when the expected output appears in the reply (case-insensitive).
            for j, test_case in enumerate(self.test_cases):
                if j >= len(assistant_messages):
                    break  # fewer grouped replies than test cases; stop rather than raise IndexError
                answer = assistant_messages[j]['text']
                passed = str(test_case['output']).lower() in answer.lower()
                if passed:
                    prompt_results[prompt]['correct'] += 1
                prompt_results[prompt]['total'] += 1

                prompt_and_results.append({"test": test_case['input'], "answer": answer, "ideal": test_case['output'], "result": passed})

            results.append(prompt_and_results)
            openai_call.delete_assistant(assistant.id)

        # Evaluate all candidate prompts concurrently, one worker per prompt.
        with ThreadPoolExecutor(max_workers=len(self.prompts)) as executor:
            executor.map(evaluate_prompt, self.prompts)

        # Calculate and print the percentage of correct answers for each prompt.
        best_prompt = self.prompts[0]
        best_percentage = 0
        data_list = []
        for i, prompt in enumerate(self.prompts):
            correct = prompt_results[prompt]['correct']
            total = prompt_results[prompt]['total']
            percentage = (correct / total) * 100 if total else 0.0  # guard against division by zero
            data_list.append({"prompt": prompt, "rating": percentage})
            print(f"Prompt {i+1} got {percentage:.2f}% correct.")
            if percentage >= best_percentage:
                best_percentage = percentage
                best_prompt = prompt
        sorted_data = sorted(data_list, key=lambda x: x['rating'], reverse=True)
        best_prompts = sorted_data[:self.best_prompts]
        print(f"The best prompt was '{best_prompt}' with a correctness of {best_percentage:.2f}%.")
        # The detailed per-test results are appended as the final element of the rating list.
        sorted_data.append(results)
        return sorted_data, best_prompts

    def evaluate_optimal_prompt(self):

        """
        Evaluate the optimal prompt by testing candidate prompts and selecting the best ones.

        Returns:
            tuple: The per-prompt rating data (with the detailed per-test results appended
            as the final element) and the top `best_prompts` prompts.
        """

        return self.test_candidate_prompts()
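
A minimal usage sketch of the new evaluator (not part of the commit). It assumes an `OPENAI_API_KEY` in the environment, test cases shaped as dicts with `input`/`output` keys, and the `openai_calls.openai_call` helpers behaving as used above; the test cases and prompts here are illustrative.

```python
# Hypothetical usage sketch: evaluates two candidate system prompts against
# two toy test cases and keeps the single best one.
from promptwizard.evals.assistants import Assistants

test_cases = [
    {"input": "What is 2 + 2?", "output": "4"},
    {"input": "What is 10 / 5?", "output": "2"},
]
prompts = [
    "You are a careful math tutor. Solve each problem step by step.",
    "You are a calculator. Show your reasoning, then state the result.",
]

evaluator = Assistants(test_cases=test_cases, prompts=prompts, best_prompts=1)
sorted_data, best_prompts = evaluator.evaluate_optimal_prompt()
print(best_prompts)  # e.g. [{'prompt': '...', 'rating': 100.0}]
```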