SneakPeek: An AI-Based Platform for Predicting User Preferences

Category: Social Networking

Services: Gen AI Development, Architecture Design and Review, Managed Engineering Teams

SneakPeek case study highlights
  • 15% increase in generative AI-based personalization
  • 20% reduction in generative AI operational costs
  • 25% increase in the platform’s operational efficiency

About SneakPeek

SneakPeek is a location-based social networking platform that allows users to interact with their local community, discover new places, and follow others in their area. It also offers customized content, location-specific tags, and filters. Brands can leverage SneakPeek to connect with their local audiences.

SneakPeek wanted to build a generative AI platform that generates personalized content for each user based on their preferences, location history, and engagement patterns.

Challenges

  • SneakPeek’s AI-based platform needed to scale machine learning models to handle growing data volumes and user requests without degrading performance.
  • Developing AI algorithms that provide accurate recommendations and are free from bias was complex for SneakPeek.
  • The AI-generated content had to align with real-world experiences and remain trustworthy, which is essential for user engagement.
  • Implementing robust data protection measures and transparent policies was vital to maintaining user trust and complying with regulations.
  • Obtaining high-quality, unbiased location-based training data at scale to train generative AI models posed a significant challenge.

Solutions

  • Simform used Amazon SageMaker to train, tune, and deploy machine learning models at scale and manage the full ML lifecycle.
  • Further, we leveraged Amazon EKS to run the SageMaker models as microservices and auto-scale them based on demand.
  • Our team used Amazon SQS to queue requests to the ML models, absorbing traffic spikes and ensuring no requests were lost (see the queueing sketch after this list).
  • We also implemented MLOps with AWS CodeBuild and SageMaker Pipelines for robust model testing, validation, and monitoring.
  • Our team set up a secure and compliant environment using AWS Secrets Manager and Amazon RDS for PostgreSQL for the training data (see the database access sketch below).
  • Further, we validated AI models continuously against real-world data using Amazon SageMaker Model Monitor.
  • Our team monitored data quality using Amazon CloudWatch metrics and set alerts on indicators such as bias and drift (see the drift-alarm sketch below).
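
The request-queueing flow can be illustrated with a minimal Python (boto3) sketch. The queue URL, endpoint name, and payload fields below are hypothetical placeholders rather than values from the actual SneakPeek deployment; the sketch only shows the pattern of buffering requests in Amazon SQS and forwarding them to a SageMaker endpoint so spikes never drop traffic.

```python
import json
import boto3

# Hypothetical names; the real queue URL and endpoint are not disclosed in the case study.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/personalization-requests"
ENDPOINT_NAME = "sneakpeek-personalization-endpoint"

sqs = boto3.client("sqs")
runtime = boto3.client("sagemaker-runtime")

def enqueue_request(user_id: str, location: str) -> None:
    """Producer side: buffer a personalization request so traffic spikes never drop it."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "location": location}),
    )

def process_queue() -> None:
    """Worker side: drain the queue and forward each request to the SageMaker endpoint."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            result = runtime.invoke_endpoint(
                EndpointName=ENDPOINT_NAME,
                ContentType="application/json",
                Body=msg["Body"],
            )
            print(json.loads(result["Body"].read()))
            # Delete only after successful inference so failed requests are retried.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```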
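
Similarly, here is a minimal sketch of how training-data credentials might be pulled from AWS Secrets Manager before connecting to Amazon RDS for PostgreSQL. It assumes a JSON secret containing host, dbname, username, and password keys; the secret ID and table name are hypothetical.

```python
import json
import boto3
import psycopg2

def get_training_db_connection(secret_id: str = "sneakpeek/training-db"):
    """Fetch DB credentials from Secrets Manager and open a connection to RDS PostgreSQL."""
    secret = boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
    creds = json.loads(secret["SecretString"])  # assumes a JSON-formatted secret
    return psycopg2.connect(
        host=creds["host"],
        port=creds.get("port", 5432),
        dbname=creds["dbname"],
        user=creds["username"],
        password=creds["password"],
    )

if __name__ == "__main__":
    with get_training_db_connection() as conn, conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM training_events;")  # hypothetical table
        print(cur.fetchone())
```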
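
For the data-quality alerts, a hedged sketch of publishing a custom drift score to Amazon CloudWatch and attaching an alarm to it. The namespace, metric name, threshold, and SNS topic below are illustrative assumptions, not SneakPeek’s actual monitoring configuration.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical namespace and metric name for illustration.
NAMESPACE = "SneakPeek/ModelQuality"

def publish_drift_score(model_name: str, drift_score: float) -> None:
    """Push a custom data-drift score (e.g. from a monitoring job) to CloudWatch."""
    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[{
            "MetricName": "FeatureDrift",
            "Dimensions": [{"Name": "ModelName", "Value": model_name}],
            "Value": drift_score,
            "Unit": "None",
        }],
    )

def create_drift_alarm(model_name: str, sns_topic_arn: str) -> None:
    """Alert the team whenever the hourly average drift score crosses the threshold."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"{model_name}-feature-drift",
        Namespace=NAMESPACE,
        MetricName="FeatureDrift",
        Dimensions=[{"Name": "ModelName", "Value": model_name}],
        Statistic="Average",
        Period=3600,
        EvaluationPeriods=1,
        Threshold=0.2,  # illustrative threshold
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],
    )
```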

Key metrics

  • 15% increase in generative AI platform performance and personalization due to efficient scaling and auto-scaling with Amazon EKS and Amazon SageMaker.
  • 20% cost reduction for the generative AI platform due to optimized resource utilization and cost monitoring with AWS Cost Explorer.
  • 25% increase in the platform’s operational efficiency due to automated build, test, and deployment processes leveraging AWS CodeBuild.

Architecture Overview

GenAI SneakPeek Architecture Diagram
  • Amazon SageMaker: We used Amazon SageMaker to deploy AI models on scalable infrastructure and enable seamless integration (see the deployment sketch after this list).
  • Amazon EKS: Our team utilized Amazon EKS to deploy SageMaker models as microservices and auto-scale them based on demand.
  • Amazon Simple Queue Service (SQS): We utilized Amazon SQS to queue requests to our ML models, ensuring that no requests were lost, even during spikes.
  • AWS CodeBuild: Our team used AWS CodeBuild for continuous integration and delivery (CI/CD) of the ML models.
  • Amazon RDS for PostgreSQL: We used Amazon RDS to store and manage the training data.
  • AWS Secrets Manager: Our team used it to securely store the credentials for accessing the training data.
  • Amazon ECR: We leveraged Amazon ECR to store and manage the ML model container images.
  • Amazon EC2: Our team used Amazon EC2 to host the ML models.
  • Amazon Route 53: We used Amazon Route 53 to route traffic to the ML models.
  • AWS Cost Explorer: We leveraged AWS Cost Explorer to monitor and manage the costs associated with the ML models (see the cost-reporting sketch after this list).
  • Amazon Pinpoint: Our team used it to send notifications about the ML models.
  • Amazon CloudWatch: We leveraged Amazon CloudWatch to monitor the ML models’ performance.
  • Amazon Virtual Private Cloud (VPC): Our team used this to provide a private network for the ML models.
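
As an illustration of the SageMaker deployment step, the following sketch deploys a container image stored in Amazon ECR and a model artifact from S3 to a real-time endpoint using the SageMaker Python SDK. The role ARN, image URI, S3 path, instance type, and endpoint name are hypothetical placeholders, not SneakPeek’s actual resources.

```python
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SneakPeekSageMakerRole"  # hypothetical role

# Container image pulled from Amazon ECR, trained model artifact pulled from S3.
model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/personalization:latest",
    model_data="s3://sneakpeek-models/personalization/model.tar.gz",
    role=role,
    sagemaker_session=session,
)

# Deploy to a real-time endpoint that downstream services (e.g. the SQS worker) can invoke.
predictor = model.deploy(
    initial_instance_count=2,
    instance_type="ml.m5.xlarge",
    endpoint_name="sneakpeek-personalization-endpoint",
)
```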
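
And for cost monitoring, a small sketch that queries AWS Cost Explorer via boto3 and breaks monthly unblended cost down by service; the date range is illustrative, and any filtering to ML-specific accounts or tags would depend on how the environment is actually organized.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

def spend_by_service(start: str, end: str) -> dict:
    """Summarize monthly spend per AWS service to track the platform's cost trend."""
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},  # ISO dates, e.g. "2024-01-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    totals = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return totals

if __name__ == "__main__":
    for service, cost in sorted(spend_by_service("2024-01-01", "2024-04-01").items(),
                                key=lambda kv: -kv[1]):
        print(f"{service}: ${cost:,.2f}")
```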

Related Case Studies

ONA dating - case study
Freewire - case study

Speak to our experts to unlock the value of Cloud!