Building AI-Powered Dream Analysis with Mistral AI in My Rails API

Posted on: April 29, 2026

Pastel is a project I built alone — an API where users log their sleep experiences: dreams, nightmares, lucid dreams, and the emotional patterns that come with them. Recently, I decided to push it further. What if the app could analyze those dreams for you? What if it could look at the description, the mood, the tags, and tell you something meaningful about what your dream might be saying?

So I built it. Using Mistral AI, background jobs, and a whole lot of defensive coding. Here’s how I did it.

What I Wanted

The flow I had in mind was simple: a user logs a sleep entry with a title, a description, a sleep type (lucid, nightmare, recurring), tags, their current mood, and the intensity of the experience. From all that context, I wanted to return a structured, insightful interpretation — and in their language, not just English.

The hard part wasn’t the API call. The hard part was making it reliable, testable, and production-ready.

The Architecture I Settled On

POST /api/v1/sleeps/:id/analyse


┌──────────────────────┐
│   SleepsController   │  ← I validate status, enqueue the job
│      #analyse        │
└──────────────────────┘

        ▼ (enqueue)
┌──────────────────────┐
│   SleepAnalyseJob    │  ← Background job via Solid Queue
│    perform()         │
└──────────────────────┘

        ▼ (call)
┌──────────────────────┐
│ SleepAnalysisService │  ← I call Mistral, parse the response
│      call()          │
└──────────────────────┘

        ▼ (HTTP)
┌──────────────────────┐
│   Mistral AI API     │  ← magistral-small-2509 model
│  /chat/completions   │
└──────────────────────┘

It’s a classic Rails async pattern. The controller validates and enqueues, the background job orchestrates, and a service object encapsulates the business logic. Let me walk you through each layer.

First Problem: How Do I Track the Analysis Lifecycle?

Dream analysis isn’t instantaneous. I needed a way to track where each request was in its lifecycle. I could have just used a boolean — analyzed or not — but that felt fragile. What if the API times out? What if the job crashes? What if the user spams the endpoint?

So I added a proper three-state enum to the Sleep model:

ANALYSIS_STATUS = {
  not_started: 'not_started',
  in_progress: 'in_progress',
  done: 'done'
}.freeze

enum :analysis_status, ANALYSIS_STATUS, default: 'not_started'

And two explicit state-transition methods so nothing happens by accident:

def mark_as_analysis_not_started
  update(analysis_status: :not_started)
end

def mark_as_analysis_done(analysis)
  update(analysis: analysis, analysis_status: :done)
end

I also encrypted the analysis field alongside the other sensitive data (title, description, current_mood). Dream analysis results are deeply personal — they deserve the same protection as the original dream:

encrypts :title, :description, :current_mood, :analysis

See the Rails Active Record Encryption docs for more on this feature.

Finally, a scope for my dashboard so I can track how many dreams have been analyzed:

scope :ai_analyzed, -> { where.not(analysis: nil).where(analysis_status: :done) }

The Controller: Idempotency Is My Obsession

When I wrote the #analyse action, the first thing I thought about was: what happens if someone calls this twice? Or if a mobile app retries the request because of a slow connection?

def analyse
  if @sleep.analysis_status == 'done' ||
     @sleep.analysis_status == 'in_progress' ||
     @sleep.analysis.present?
    return render json: {
      message: 'Sleep analysis already in progress or done',
      code: 'analysis_already_in_progress_or_done'
    }, status: :ok
  end

  locale = params[:locale] || I18n.default_locale
  SleepAnalyseJob.perform_later(@sleep.id, locale)

  @sleep.analysis_status = :in_progress
  if @sleep.save
    render json: SleepSerializer.render(@sleep, view: :update_and_show)
  else
    render json: { message: 'Failed to start sleep analysis', code: 'analysis_start_failed' },
           status: :unprocessable_content
  end
end

Three decisions here that I’m proud of:

  1. Idempotency everywhere. I check the status enum AND the presence of an existing analysis. The endpoint can be called a hundred times — only the first one does anything.
  2. Locale passthrough. The locale travels from the request, through the job, into the service, and directly into the Mistral prompt. The AI responds in the user’s language.
  3. Optimistic status update. I set the status to in_progress immediately. If a second request arrives while the job is running, it gets rejected cleanly.
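
The guard condition boils down to a pure predicate. Here's a plain-Ruby sketch of that logic (the method name is mine, not from the codebase):

```ruby
# A new analysis may start only from a clean slate: status 'not_started'
# AND no previously stored analysis text. Any other combination is rejected.
def analyzable?(status, existing_analysis)
  status == 'not_started' && existing_analysis.nil?
end

puts analyzable?('not_started', nil)         # => true
puts analyzable?('in_progress', nil)         # => false
puts analyzable?('done', 'old analysis')     # => false
puts analyzable?('not_started', 'orphaned')  # => false
```

Checking both the status and the stored text is deliberate: it covers the edge case where an analysis exists but the status column somehow drifted out of sync.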

Solid Queue for Background Processing

I integrated Solid Queue — the database-backed Active Job backend that ships as the default in Rails 8 — to handle dream analysis asynchronously. Making an AI API call synchronously in a controller request? That’s a terrible user experience.

class SleepAnalyseJob < ApplicationJob
  queue_as :sleep_analysis

  def perform(sleep_id, locale)
    sleep = Sleep.find_by(id: sleep_id)
    return if sleep.blank?
    return if sleep.analysis_status == 'done' || sleep.analysis.present?

    SleepAnalysisService.call(sleep, locale)
  rescue StandardError => e
    Rails.logger.error("Error analyzing sleep with ID: #{sleep_id} - #{e.message}")
    sleep&.mark_as_analysis_not_started
    raise
  end
end

Three layers of defense in this job:

  1. find_by instead of find. If the sleep was deleted between the controller and the job execution, I get nil instead of a crash.
  2. Double-check idempotency. Race conditions are real. The controller set the status to in_progress, but I verify again before calling the service.
  3. Graceful recovery. If something explodes — API down, network issue, whatever — I reset the status to not_started so the user can retry. Then I re-raise so the job framework handles its retry logic.

I also put it on a dedicated :sleep_analysis queue. If my app grows and I have other background work, I can control concurrency and priority per queue. Check out the Solid Queue documentation for configuration details.
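
For reference, a dedicated queue with its own concurrency might look something like this in `config/queue.yml` (the worker counts here are illustrative, not from the project):

```yaml
# config/queue.yml (illustrative values)
production:
  dispatchers:
    - polling_interval: 1
      batch_size: 500
  workers:
    # A low-concurrency worker dedicated to AI calls, so slow
    # responses from Mistral never starve the rest of the app
    - queues: [sleep_analysis]
      threads: 1
    # A general-purpose worker for everything else
    - queues: "*"
      threads: 3
```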

The Service: My Conversation with Mistral

This is the part I was most excited about — and the most nervous about. I decided to use Net::HTTP from Ruby’s stdlib instead of pulling in another gem. For a single API integration, I didn’t want the dependency overhead of Faraday or HTTParty.

class SleepAnalysisService < ApplicationService
  def initialize(sleep, locale)
    super()
    @mistral_api_key = ENV.fetch('MISTRAL_API_KEY', nil)
    @mistral_api_url = 'https://api.mistral.ai/v1/chat/completions'
    @sleep = sleep
    @locale = locale
  end

  def call
    response = send_request_to_mistral
    result = parse_response(response) if response.present?
    update_sleep_with_analysis(result) if result.present?
  end

Simple entry point: send, parse, persist. Each step checks the previous one.

Why Mistral and Which Model

I chose Mistral’s magistral-small-2509 model. It’s fast, cost-effective, and produces quality responses for creative and interpretive tasks. The payload follows their chat completions API:

def build_payload
  {
    model: 'magistral-small-2509',
    response_format: { type: 'text' },
    messages: [
      { role: 'system', content: system_prompt },
      { role: 'user', content: user_prompt }
    ]
  }.to_json
end

See the full Mistral API documentation for endpoints, authentication, and more.

The Hardest Part: Prompt Engineering

I’ll be honest — I rewrote the system prompt at least five times. The first version produced responses that were too generic. The second was too clinical. I had to find a balance: insightful without being mystical, personal without being presumptuous.

Here’s what I ended up with:

def system_prompt
  <<~PROMPT
    You are an expert dream analyst combining psychology, symbolism, and emotional intelligence.
    Your role is to provide structured, insightful dream interpretations that feel personal and meaningful.

    When analyzing a dream, you must:
    - Identify the core narrative and dominant themes
    - Decode symbols and archetypes present in the dream
    - Connect the emotional tone (mood + intensity) to the dream's meaning
    - Consider the sleep type (lucid, nightmare, recurring, etc.) as interpretive context
    - Use the tags as thematic anchors to deepen the analysis
    - Reference the timing ("when") only if it adds contextual relevance

    Structure your response EXACTLY (adapt title to the language
    corresponding to this locale: #{locale}) as follows:

    ##### 🌙 Global interpretation
    A 3-4 sentence synthesis of the dream's overall meaning.

    ##### 🎭 Theme and meaning
    Identify 2-3 key symbols or themes and explain their psychological significance.

    ##### 💭 Emotional dimension
    Analyze how the mood and intensity level shape the dream's message.

    ##### 🔮 Points to consider
    2-3 open-ended questions or reflections.

    Keep the tone warm, insightful, and grounded.
    Respond in the language corresponding to this locale: #{locale}.
  PROMPT
end

What I learned from this: be explicit about everything. The exact section structure, the tone, and the output language all have to be spelled out; the model won't infer any of them.

The user prompt is simpler — I just assemble all the sleep data into a coherent request:

def user_prompt
  <<~PROMPT
    Please analyze this dream and provide a structured interpretation:

    **Title:** #{sleep.title}
    **Type:** #{sleep.sleep_type}
    **When:** #{sleep.happened}

    **Description:**
    #{sleep.description}

    **Tags:** #{sleep.tags.pluck(:name).join(', ')}
    **Emotional state (current mood):** #{sleep.current_mood}
    **Dream intensity:** #{sleep.intensity}

    Use all of the above attributes in your analysis. The tags, mood,
    and intensity are especially important.
  PROMPT
end

I make sure to pass every attribute — the title, type, when it happened, the full description, the tags, the mood, and the intensity. I specifically call out that tags, mood, and intensity are important because the AI has a tendency to focus only on the description and ignore the metadata.

The HTTP Request

Nothing fancy here — just Net::HTTP with proper error handling:

def send_request_to_mistral
  uri = URI(mistral_api_url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true

  request = Net::HTTP::Post.new(uri.path, {
    'Content-Type' => 'application/json',
    'Authorization' => "Bearer #{mistral_api_key}"
  })
  request.body = build_payload
  response = http.request(request)

  if response.is_a?(Net::HTTPSuccess)
    response.body
  else
    Rails.logger.error "Failed to get response from Mistral API: #{response.code}"
    sleep.mark_as_analysis_not_started
    nil
  end
rescue StandardError => e
  Rails.logger.error "Error while sending request to Mistral API: #{e.message}"
  sleep.mark_as_analysis_not_started
  nil
end

See the Ruby Net::HTTP documentation for more on HTTP requests in Ruby.

If anything goes wrong — HTTP error, network failure, whatever — I log it, reset the status, and return nil. The caller handles the nil gracefully.
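
One thing I'd add on top of this (it's not in the original code): explicit timeouts. Net::HTTP defaults to 60 seconds for both connecting and reading, which can tie up a queue worker for a long time on a hung connection:

```ruby
require 'net/http'

http = Net::HTTP.new('api.mistral.ai', 443)
http.use_ssl = true
http.open_timeout = 5   # fail fast if the TCP/TLS handshake stalls
http.read_timeout = 30  # cap how long we wait for the completion

# A timeout raises Net::OpenTimeout or Net::ReadTimeout, both
# StandardError subclasses, so the existing rescue already catches them.
puts http.open_timeout  # => 5
puts http.read_timeout  # => 30
```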

Parsing the Response

def parse_response(raw_response)
  response = JSON.parse(raw_response)
  return response['choices'].first['message']['content'][1]['text'] if response.present?
  nil
end

I’ll be honest — this parsing is fragile. The path to the text content depends on Mistral’s response structure. If they change their API, this breaks. In a production system at scale, I’d use dig or a response schema validator. For now, it works, and I know exactly where to look if it breaks.
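
For illustration, here's what the dig version would look like, run against a payload shaped the way this code expects (the sample response and the method name are mine, not real Mistral output):

```ruby
require 'json'

# Hypothetical payload mirroring the shape the service expects:
# choices[0].message.content is an array of parts, with the text in part 1.
raw = {
  choices: [
    { message: { content: [
      { type: 'thinking', thinking: '...' },
      { type: 'text', text: 'Your dream suggests a need for rest.' }
    ] } }
  ]
}.to_json

def extract_analysis(raw_response)
  parsed = JSON.parse(raw_response)
  # dig returns nil at the first missing key instead of raising NoMethodError
  parsed.dig('choices', 0, 'message', 'content', 1, 'text')
rescue JSON::ParserError
  nil
end

puts extract_analysis(raw)                 # => Your dream suggests a need for rest.
puts extract_analysis('{}').inspect        # => nil
puts extract_analysis('not json').inspect  # => nil
```

The dig chain degrades to nil on any structural change instead of raising, which plays nicely with the service's "each step checks the previous one" pattern.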

Testing: Because I Don’t Want Surprise API Bills

Testing AI integrations is tricky. I don’t want to hit the real Mistral API on every test run — that costs money, it’s slow, and it’s flaky. But I need confidence that my code handles real responses correctly.

VCR to the Rescue

I set up VCR to record real API responses and replay them deterministically:

# spec/support/vcr.rb
VCR.configure do |config|
  config.cassette_library_dir = 'spec/fixtures/vcr_cassettes'
  config.hook_into :webmock
  config.configure_rspec_metadata!
  config.filter_sensitive_data('<MISTRAL_API_KEY>') { ENV['MISTRAL_API_KEY'] }
end

That filter_sensitive_data line is crucial — it replaces my actual API key in the cassette files so I never accidentally commit it. Combined with WebMock, VCR intercepts all HTTP requests during tests.
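
For context, a cassette is just a YAML file. Here's roughly what one looks like after filtering (abridged and illustrative, not copied from the project):

```yaml
# spec/fixtures/vcr_cassettes/sleep_analysis_service/success.yml (abridged)
http_interactions:
- request:
    method: post
    uri: https://api.mistral.ai/v1/chat/completions
    headers:
      Authorization:
      - Bearer <MISTRAL_API_KEY>   # the real key never touches disk
  response:
    status:
      code: 200
      message: OK
```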

Then in my service spec:

describe SleepAnalysisService do
  let(:sleep) { create(:sleep, :with_tags) }
  let(:locale) { 'en' }

  describe '#call' do
    it 'analyzes the sleep and persists the result' do
      VCR.use_cassette('sleep_analysis_service/success') do
        described_class.call(sleep, locale)

        expect(sleep.reload.analysis).to be_present
        expect(sleep.analysis_status).to eq('done')
      end
    end
  end
end

The first time I run this, VCR hits the real Mistral API and records the response. Every time after that, it replays the cassette. Fast, deterministic, free.

Testing the Job

The job spec verifies idempotency and error recovery:

describe SleepAnalyseJob do
  let(:sleep) { create(:sleep) }

  it 'enqueues the job' do
    expect {
      described_class.perform_later(sleep.id, 'en')
    }.to have_enqueued_job.on_queue('sleep_analysis')
  end

  it 'does not analyze if already done' do
    sleep.mark_as_analysis_done('existing analysis')
    expect(SleepAnalysisService).not_to receive(:call)
    described_class.perform_now(sleep.id, 'en')
  end
end

Testing the Endpoint

And at the request level, I verify the idempotency guard actually works:

describe 'POST /api/v1/sleeps/:id/analyse' do
  context 'when analysis is already done' do
    before { sleep.mark_as_analysis_done('analysis text') }

    it 'returns ok with appropriate message' do
      post analyse_api_v1_sleep_path(sleep)
      expect(response).to have_http_status(:ok)
      expect(json_response['code']).to eq('analysis_already_in_progress_or_done')
    end
  end
end

What I Learned Along the Way

Prompt Engineering Is Iterative

I can’t stress this enough. The system prompt went through at least five revisions. Each one got better, but I had to actually read the AI outputs and notice patterns: too vague, too clinical, ignoring the tags, wrong language. Only by iterating on real outputs did I get something that felt right.

Status Machines Save You From Yourself

The three-state enum combined with double-checks at both the controller and job level — that saved me from bugs I didn’t even know I was going to have. Race conditions, retries, duplicate requests — they all get handled gracefully because I thought about failure modes upfront.

VCR Is Worth the Setup Time

Recording API responses once and replaying them forever is a superpower. My test suite runs fast, I never worry about API rate limits, and I’m not burning through credits on every CI run.

Always Reset on Failure

Early in development, I had a bug where a failed API call left the status stuck at in_progress. The user couldn’t retry. The analysis was orphaned. That’s when I added the mark_as_analysis_not_started calls in every error path. Defensive coding isn’t paranoia — it’s experience.

Less Is More for Dependencies

I could have added Faraday or HTTParty or a dedicated Mistral SDK. Instead, I used Net::HTTP and it works fine. The code is a bit more verbose, but I have one fewer gem to maintain and one fewer dependency to audit.

Final Thoughts

Building this feature was a great exercise in wrapping something inherently unpredictable — an AI API call — in predictable, well-tested infrastructure. The Mistral integration itself is maybe 30% of the work. The other 70% is status tracking, background jobs, error recovery, idempotency, and testing.

That’s the part I’m most proud of: not that I called an AI API, but that I built a system around it that’s resilient enough to run in production without me babysitting it.

If you’re building something similar, my advice is: spend more time on the plumbing than on the prompt. A good prompt with bad error handling will fail silently. A mediocre prompt with great error handling will at least fail gracefully — and you’ll have the logs to fix the prompt later.

You can check out the full implementation in the Pull Request on GitHub.