Internal Server Error

I keep getting a 500 Internal Server Error whenever I try to fine-tune my LLM on a large dataset with the Gradient SDK. Can anyone help me out with this? Here is the code for reference:

from gradientai import Gradient

def create_instruction(inst, paragraph, response):
    return {"inputs": f"### Instruction: {inst} {paragraph} \n\n### Response: {response}"}

gradient = Gradient()

base_model = gradient.get_base_model(base_model_slug="nous-hermes2")

sample_query = "### Instruction: You are an Earnings Calls Analyzer, you read the text given by the user in an analytical manner and understand the major topics talked about in that text. Read the paragraph provided and give me the top 5 topics of 1 or 2 words max: According to the Nielsen reports for the 13 weeks through October 23, 2021 all outlets combined namely convenience, grocery, drug, mass merchandisers, sales in dollars in the energy drink category, including energy shots increased by 12.8% versus the same period a year ago. Sales of the Company's energy brands including Reign were up 7.1% in the 13-week period. Sales of Monster were up 10.7%. Sales of Reign were down 7.6%. Sales of NOS decreased 18.3% and sales of Full Throttle increased 8.9%. It is important to note that with regard to the decrease in sales of NOS during the third quarter we experienced shortages in the supply of concentrate for NOS, which resulted in reduced production, reduced sales and lack of product availability at retail.? \n\n ### Response:"

new_model_adapter = base_model.create_model_adapter(
    name="ECA_Model_New"
)
print(f"Created model adapter with id {new_model_adapter.id}")

inst = "You are an Earnings Calls Analyzer, you read the text given by the user in an analytical manner and understand the major topics talked about in that text. I do not want any other text except the list in square brackets. Read the paragraph provided and give me the top 5 topics talked about in the paragraph. I dont want the exact words but the summarized meaning.Keep the topics to 1 or 2 words max:"

# df is my pandas DataFrame with 'Paragraph' and 'Processed_Response' columns (loaded earlier, not shown here)
samples = [
    create_instruction(inst, df['Paragraph'][i], df['Processed_Response'][i]) for i in range(50)
]

num_epochs = 1
count = 0
while count < num_epochs:
    print(f"Fine-tuning the model with iteration {count + 1}")
    new_model_adapter.fine_tune(samples=samples)
    count += 1

# After fine-tuning

completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output
print(f"Generated(after fine-tuning): {completion}")

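That first pass only covers the first 50 rows of df. To fine-tune on the rest of the dataset (rows 51 to 1039) in chunks of 30, I then run this: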
from gradientai import Gradient

def create_and_fine_tune_model(gradient, new_model_adapter, inst, df, start, end, sample_query, num_epochs=1):
    # Walk through the dataset in chunks of 30 rows and fine-tune on each chunk
    for i in range(start, end + 1, 30):
        new_model = gradient.get_model_adapter(model_adapter_id=new_model_adapter.id)
        print(f"Fetched model adapter with id {new_model.id}")

        # end + 1 so the last row is included in the final chunk
        samples = [create_instruction(inst, df['Paragraph'][j], df['Processed_Response'][j]) for j in range(i, min(i + 30, end + 1))]

        count = 0
        while count < num_epochs:
            print(f"Fine-tuning the model with iteration {count + 1}")
            new_model.fine_tune(samples=samples)
            count += 1

        completion = new_model.complete(query=sample_query, max_generated_token_count=100).generated_output
        print(f"Generated(after fine-tuning): {completion}")

        new_model_adapter = new_model  # Save the last model for the next iteration

create_and_fine_tune_model(gradient, new_model_adapter, inst, df, 51, 1039, sample_query)

gradient.close()

What changes should I make to the code to tackle this error?
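One idea I had was to send much smaller batches and retry each fine_tune call with a backoff whenever the server errors out. The sketch below is roughly what I mean, but I am not sure it addresses the real cause (fine_tune_with_retry is my own helper, and the batch size of 10 and the retry/backoff numbers are just guesses on my part, not anything from the Gradient docs):

import time

def fine_tune_with_retry(adapter, samples, max_retries=3, base_delay=5):
    # Retry a single fine_tune call, backing off exponentially between attempts
    for attempt in range(max_retries):
        try:
            adapter.fine_tune(samples=samples)
            return
        except Exception as err:  # ideally only catch the SDK's server/HTTP error here
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"fine_tune failed ({err}); retrying in {delay}s")
            time.sleep(delay)

batch_size = 10  # much smaller than 30; this value is a guess, not a documented limit
for i in range(51, 1040, batch_size):
    batch = [
        create_instruction(inst, df['Paragraph'][j], df['Processed_Response'][j])
        for j in range(i, min(i + batch_size, 1040))
    ]
    fine_tune_with_retry(new_model_adapter, batch)

Would something along those lines help, or is the 500 likely coming from something else entirely?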