Security & Data Protection
How do you handle encryption of data at rest and in transit?
-
Data at rest: Utilizing AES-256 encryption, we ensure that stored data remains secure against unauthorized access. Encryption keys are managed through secure key management systems.
-
Data in transit: All data transmitted between systems is protected using TLS 1.2 or higher, ensuring confidentiality and integrity during transfer. This includes HTTPS for web traffic and secure protocols for internal communications.
What are your protocols for securing APIs and web services?
-
Authentication & authorization: OAuth 2.0 and OpenID Connect protocols to ensure secure authentication and authorization processes. This approach allows token-based access control, minimizing the risk of credential compromise.
-
Data encryption: All data transmitted between clients and our APIs is encrypted using Transport Layer Security (TLS) protocols, ensuring confidentiality and integrity during transit.
-
Input validation & sanitization: We validate and sanitize all inputs to our APIs to prevent common vulnerabilities such as SQL injection and cross-site scripting (XSS) attacks.
-
Rate limiting & throttling: To protect against abuse and denial-of-service attacks, we implement rate limiting and throttling mechanisms, controlling the number of requests a client can make within a specified timeframe.
-
Monitoring & logging: Our systems continuously monitor API usage and maintain detailed logs to detect and respond to suspicious activities.
How do you manage user authentication and authorization (e.g. OAuth2, JWT)?
-
OAuth 2.0 & OpenID Connect: OAuth 2.0 for secure authorization, allowing users to grant limited access to their resources without exposing credentials. OpenID Connect extends OAuth 2.0 to include authentication, providing a standardized method to verify user identities.
-
JWT Implementation: Upon successful authentication, we issue JWTs that encapsulate user identity and authorization claims. These tokens are signed using strong cryptographic algorithms (e.g., RS256) to ensure integrity and are validated on each request to protected resources.
-
Token Management: Access tokens are designed with short lifespans to minimize risk, and refresh tokens for session continuity. Token revocation mechanisms are in place to invalidate tokens when necessary, enhancing security.
-
Role-Based Access Control (RBAC): We enforce RBAC to ensure users have access only to the resources necessary for their roles, adhering to the principle of least privilege and zero trust.
-
Security Best Practices: Our implementation aligns with OWASP recommendations, including input validation, secure storage of credentials, and protection against common vulnerabilities such as token replay attacks.
Do you conduct regular security audits or penetration testing?
-
Annual Third-Party Penetration Testing: We engage certified external security firms to perform comprehensive penetration tests at least once a year. These assessments simulate real-world attack scenarios to identify and remediate vulnerabilities, aligning with industry standards such as the OWASP Testing Guide and NIST SP 800-115.
-
Monthly Automated Vulnerability Scanning: We utilize automated tools to conduct monthly scans of our applications and infrastructure. This approach allows us to detect and address potential security issues.
-
Continuous Monitoring and External Assessments: Beyond scheduled testing, we collaborate with external security experts to perform periodic assessments and ensure compliance with best practices.
How is access to production environments controlled and monitored?
-
Role-Based Access Control (RBAC): We implement RBAC to ensure that users have access only to the resources necessary for their roles, adhering to the principle of least privilege and zero trust.
-
Multi-Factor Authentication (MFA): All access to production systems requires MFA, adding a layer of security beyond traditional password-based authentication.
-
Privileged Access Management (PAM): We employ PAM solutions to manage and monitor privileged accounts, ensuring that elevated access is granted appropriately and audited regularly.
-
Centralized Logging: All access and activity logs are centralized, enabling efficient monitoring, analysis, and auditing of actions within the production environment.
-
Real-Time Monitoring: We utilize real-time monitoring tools to detect and respond to unauthorized access attempts or anomalous behavior.
-
Regular Audits: Periodic security audits are conducted to assess access controls and ensure compliance with established security policies and standards.
Compliance & GDPR
Are you familiar with GDPR requirements, and how do you ensure compliance?
-
Data Mapping & Inventory: We conduct thorough audits to catalog all personal data processed, detailing data types, processing purposes, storage locations, access permissions, and retention periods.
-
Lawful Basis for Processing: Each data processing activity is grounded in a legitimate legal basis, such as consent, contractual necessity, or legitimate interests, as stipulated in Article 6 of the GDPR.
-
Privacy Policies & Transparency: Our privacy policies are crafted to be clear and accessible, outlining data collection practices, processing purposes, data subject rights.
-
Data Subject Rights Management: We have established procedures to facilitate data subject rights, including access, rectification, erasure, restriction, portability, and objection, ensuring responses within the mandated timeframes.
-
Data Processing Agreements (DPAs): We enter into DPAs with all third-party processors, ensuring they adhere to GDPR standards and provide adequate data protection measures.
-
Data Protection Impact Assessments (DPIAs): For processing activities that pose high risks to data subjects’ rights and freedoms, we conduct DPIAs to identify and mitigate potential impacts.
-
Security Measures: We implement appropriate technical and organizational measures, such as encryption, access controls, and regular security assessments, to safeguard personal data against unauthorized access, alteration, or destruction.
-
Data Breach Response: In the event of a data breach, we have protocols to notify the relevant supervisory authority within 72 hours and communicate with affected data subjects when required.
Can you guarantee that personal data will be stored and processed within the EU or in GDPR-compliant regions?
For EU
To comply with the General Data Protection Regulation (GDPR), we will:
-
Data Storage and Processing: Utilize cloud services that offer EU-based data centers, such as Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP), ensuring that all personal data of EU users is stored and processed within the EU.
-
Data Transfer Restrictions: Avoid transferring personal data outside the EU unless necessary. If transfers are required, we will implement appropriate safeguards, such as Standard Contractual Clauses (SCCs) or rely on the EU-U.S. Data Privacy Framework (DPF) for transfers to certified U.S. entities.
-
Compliance Measures: Implement data minimization, purpose limitation, and obtain explicit consent for data processing activities, as mandated by GDPR.
For USA
For the United States companies, we will:
-
Data Storage and Processing: Leverage U.S.-based data centers provided by cloud services like AWS, Azure, or GCP to store and process personal data domestically.
-
Privacy Compliance: Adhere to applicable U.S. data privacy laws, such as the California Consumer Privacy Act (CCPA), ensuring transparent data practices and user rights.
-
Data Segregation: Maintain separate data storage environments for U.S. users to prevent cross-border data flow, unless explicitly consented to by the user.
Do you have a Data Protection Officer (DPO) or someone responsible for compliance?
As a resident of Diia.City and a member of the Lviv IT Cluster, our company is subject to annual compliance audits conducted by certified external auditors. These audits assess adherence to various legal obligations, including data protection practices. While we have not formally appointed a DPO, our legal compliance is regularly reviewed through these audits, ensuring that our data protection measures align with national and international standards.
How do you handle data subject rights (e.g., deletion, access requests)?
-
Right to Be Informed: We provide clear and transparent information about the collection and use of personal data through our privacy notices.
-
Right of Access: Individuals can request access to their data, and we respond within one month, providing the necessary information as required by GDPR.
-
Right to Rectification: We allow data subjects to request corrections to inaccurate or incomplete personal data.
-
Right to Erasure: Also known as the ‘right to be forgotten,’ individuals can request the deletion of their data under certain conditions.
-
Right to Restrict Processing: Data subjects can request the restriction of processing their data in specific circumstances.
-
Right to Data Portability: We provide personal data in a structured, commonly used, and machine-readable format upon request.
-
Right to Object: Individuals can object to the processing of their data based on legitimate interests or for direct marketing purposes.
-
Rights Related to Automated Decision-Making: We do not engage in automated decision-making that produces legal effects concerning individuals.
Can you sign a Data Processing Agreement (DPA) with us?
As a company operating within the Diia.City and a member of the IT Cluster, we adhere to stringent data protection standards. Our operations are subject to regular compliance audits, ensuring alignment with both Ukrainian legislation and international data protection regulations, including the General Data Protection Regulation (GDPR).
We are open to reviewing and signing your DPA template or can provide our standard agreement for your consideration. Please share your preferred version, and we will proceed accordingly.
Development Practices
What secure coding practices do you follow (e.g. OWASP Top 10)?
Our development process incorporates the following key practices:
-
Input Validation: All inputs are validated on the server side to prevent injection attacks and ensure data integrity.
-
Output Encoding: We encode outputs to prevent cross-site scripting (XSS) vulnerabilities.
-
Authentication and Password Management: We implement strong authentication mechanisms and securely manage passwords, including hashing and salting.
-
Session Management: Sessions are securely managed with appropriate timeouts and regeneration to prevent hijacking.
-
Access Control: We enforce role-based access controls to ensure users have appropriate permissions.
-
Cryptographic Practices: Sensitive data is protected using strong encryption standards, and cryptographic keys are securely managed.
-
Error Handling and Logging: We handle errors gracefully without exposing sensitive information and maintain logs for auditing purposes.
-
Data Protection: Personal and sensitive data are handled in compliance with data protection regulations, ensuring confidentiality and integrity.
-
Communication Security: Data in transit is secured using protocols like TLS to prevent eavesdropping and tampering.
-
System Configuration: We maintain secure configurations for all systems, disabling unnecessary services and applying security patches.
What frameworks and libraries do you use in your stack? How do you vet third-party dependencies?
At West Solutions, we employ a modern and secure technology stack, ensuring both performance and compliance.
Web Development Services
Third-Party Dependency Vetting
We prioritize the security and reliability of third-party dependencies through the following practices:
-
Automated Vulnerability Scanning: Utilizing tools to detect known vulnerabilities in dependencies.
-
Regular Updates: Keeping all libraries and frameworks up-to-date to incorporate the latest security patches.
-
Manual Reviews: Conducting code reviews for critical dependencies to assess code quality and security.
-
Community Trust: Preferring widely adopted libraries with active maintenance and strong community support.
-
License Compliance: Ensuring all third-party components comply with licensing requirements to avoid legal issues.
What is your CI/CD process, and how do you handle rollbacks and canary deployments?
CI/CD Pipeline Overview
Our CI/CD process is structured into distinct stages, each designed to maintain code integrity and facilitate seamless deployments:
Build Stage:
- Utilize Docker containers to create environment-agnostic builds.
- Compile assets, perform static code analysis, and generate artifacts.
Test Stage:
- Execute automated unit and integration tests using frameworks like PHPUnit and Jest.
- Implement security scans with tools to detect vulnerabilities.
Deploy Stage:
- Deploy to staging environments for user acceptance testing.
- Upon approval, promote builds to production using GitLab CI/CD pipelines.
Rollback Mechanisms
To ensure rapid recovery from potential issues, we have established comprehensive rollback strategies:
- Versioned Deployments:
Tag each release in Git, allowing easy reversion to previous stable states.
- Artifact Retention:
Store build artifacts in GitLab’s package registry, facilitating the redeployment of prior versions.
- Automated Rollbacks:
Integrate health checks post-deployment; if anomalies are detected, automated scripts trigger a rollback to the last known good configuration.
Canary Deployments
For critical applications, we employ canary deployment strategies to minimize risk:
- Incremental Rollouts:
Deploy new versions to a subset of users or servers, monitoring performance before full-scale release.
- Monitoring and Metrics:
Leverage Prometheus and Grafana to track key performance indicators (KPIs) during canary phases.
- Traffic Routing:
Utilize Kubernetes Ingress controllers or service meshes like Istio to manage traffic distribution between canary and stable versions.
Do you use feature flags or A/B testing frameworks? How do you manage those securely?
Feature Flags: Implementation & Security
-
Centralized Management: Feature flags are managed through a centralized system, allowing for consistent control and auditing.
-
Short-lived Flags: We ensure feature flags are temporary and remove them once the associated feature is fully deployed, reducing code complexity.
-
Access Controls: Strict access controls are in place to prevent unauthorized modifications to feature flags.
-
Audit Logging: All changes to feature flags are logged for accountability and compliance purposes.
-
Secure Evaluation: Feature flag evaluations are performed server-side to protect sensitive information and maintain application integrity.
A/B Testing Frameworks: Tools & Security Measures
-
Tool Selection: We choose A/B testing tools that offer robust security features, including data encryption and compliance with regulations like GDPR and CCPA.
-
Data Privacy: User data is anonymized and handled by privacy laws to protect user identities during testing.
-
Monitoring & Analysis: We continuously monitor test performance and analyze results to make informed decisions while safeguarding user data
AI & LLM Integration
What experience do you have integrating AI or LLM-based systems into production apps?
We have hands-on experience designing, integrating, and deploying AI/LLM-driven chat and content systems in production environments. Our work spans real-time interaction interfaces, backend, and secure API-layer communication with hosted or self-managed large language models.
Key Highlights:
-
Integrated via OpenAI API for live user interaction
-
Connected custom knowledge bases via REST APIs and scheduled sync jobs
-
Designed stateless and stateful chat workflows with user feedback loops
-
Used CRON and API bridges to refresh the model context with updated content
-
Managed API key security, rate limiting, and privacy per GDPR principles
-
Supported both hosted and on-prem inference setups
-
Enabled traceable interaction logging and analytics for response optimization
How do you manage prompt injection and data leakage risks with LLMs?
Prompt Injection Mitigation
-
Input Sanitization:
All user inputs are sanitized using strict allowlists and escaped to remove special characters or tokens that could manipulate prompt context.
-
Prompt Templates:
We use well-structured prompt templates with static instruction blocks and clearly bounded user input areas. This ensures user content cannot override system behavior.
-
System/User Segregation:
Clear separation of system-level instructions from user-level content using delimiters or structured JSON formats to reduce the injection surface.
-
Content Filtering:
We scan both inputs and outputs using keyword-based and AI-assisted filters to block or redact sensitive terms, jailbreak attempts, or malicious payloads.
Data Leakage Protection
-
Minimal Context Principle:
Only relevant, non-sensitive data is passed to LLMs. PII and business-sensitive inputs are explicitly excluded.
-
Access Control:
Access to prompt construction logic, logs, and model responses is restricted and audited, especially in multi-tenant or enterprise setups.
-
API Key Security:
LLM API keys are stored in encrypted secret managers (e.g. Vault, AWS Secrets Manager) and rotated regularly.
-
Output Monitoring:
Responses from the LLM are post-processed and filtered before delivery to end users to catch hallucinations or confidential data leaks.
-
Rule-based & keyword filters catch profanity, sensitive terms, and format issues.
-
AI moderation tools (e.g. OpenAI Moderation) scan for toxicity or unsafe content.
-
-
Hosted vs On-Prem Decisioning:
For high-risk data contexts, we prefer on-prem or private-hosted models, avoiding external LLM APIs entirely.
Where do you host inference models (on-prem, EU cloud, U.S.-based)?
Default setups use EU-based cloud providers (e.g. AWS Frankfurt, Azure EU, or Hetzner) to meet GDPR and data residency requirements.
-
For projects involving sensitive or regulated data, we offer on-premise or private-cloud inference (e.g. with LLaMA, Mistral) using Dockerized GPU nodes.
U.S.-based APIs (e.g. OpenAI, Anthropic) are used only when data exposure is low-risk and explicit consent is given.
This allows us to balance performance, cost, and compliance per project needs.
Do you fine-tune or use hosted APIs (e.g. OpenAI, HuggingFace)? How do you secure API keys?
We primarily use hosted APIs from providers like OpenAI, Anthropic, and selectively support fine-tuning when use cases demand domain-specific performance.
API Use & Fine-Tuning
-
Default approach: Hosted APIs with prompt engineering for fast iteration and cost efficiency.
-
Fine-tuning: Applied only for narrow, high-volume tasks (e.g. structured Q&A, code generation), usually on smaller open-source models .
API Key Security
-
Environment Variables: Keys are never hardcoded—managed securely via .env or deployment platforms.
-
Secret Managers: In production, we use tools like AWS Secrets Manager, Vault, or GitLab CI/CD encrypted variables.
-
Access Control: Limited to specific services, with logging and key rotation enforced quarterly or upon personnel change.
-
Rate Limits & Scopes: Keys are scoped and rate-limited to minimize abuse and limit blast radius.
Hosting & Infrastructure
Cloud Providers and Data Center Locations We Use for Hosting & Compliance
Cloud Providers We Use
-
Amazon Web Services (AWS) — typically hosted in Frankfurt (eu-central-1) or Ireland (eu-west-1) for GDPR alignment.
-
Google Cloud Platform (GCP) — used for compute-heavy workloads, with EU region prioritization.
-
Hetzner — for clients preferring bare-metal performance in Germany or Finland.
-
DigitalOcean — for lightweight deployments, typically in AMS3 (Amsterdam) or FRA1 (Frankfurt).
-
Cloudways (via DigitalOcean/Vultr/Linode) — often used for managed WordPress hosting in EU-based zones.
How do you manage backups and disaster recovery? Are backups encrypted?
Backup Strategy
-
Automated Daily Backups: We perform full and incremental backups of critical data and application states on a daily schedule.
-
Retention Policy: Backup snapshots are retained for 7 to 30 days, depending on project SLA, with long-term archiving available for enterprise clients.
-
Multi-Region Storage: Backups are stored across redundant data centers (within the EU by default), ensuring resilience against local failures.
Encryption & Security
-
At Rest: All backups are encrypted using AES-256 encryption before storage.
-
In Transit: Backups are transferred over secure channels (TLS 1.2+).
-
Access Control: Access to backups is tightly restricted and logged, with MFA required for restores.
-
Integrity Checks: Hash validation ensures backup data integrity before and after restore.
Disaster Recovery
-
RPO & RTO Targets: Typical RPO (Recovery Point Objective) is under 24 hours, and RTO (Recovery Time Objective) ranges from 1–4 hours depending on the system tier.
-
Restore Testing: We conduct regular DR drills and restore verifications (monthly or quarterly depending on criticality).
-
Failover Options: For critical apps, we support DNS-based or load balancer-based automatic failover between regions.
Business Continuity & Trust
What is your plan for maintaining service in the event of infrastructure outages or regional disruptions?
Multi-Region & Multi-Cloud Redundancy
-
Geographically Distributed Deployments: We deploy applications across multiple cloud providers (e.g., AWS, GCP, Hetzner) and regions (e.g., Germany, Finland, Ireland) to mitigate the risk of regional failures.
-
Active-Passive and Active-Active Configurations: Depending on the criticality of the application, we utilize active-passive setups for cost efficiency or active-active configurations for high availability.
Automated Failover & Recovery
-
Infrastructure as Code (IaC): Using tools, we automate the provisioning and recovery of infrastructure, ensuring rapid deployment in alternate regions when needed.
-
Continuous Data Replication: We employ real-time data replication strategies to ensure data consistency across regions, minimizing data loss during failovers.
Defined RTO and RPO Metrics
-
Recovery Time Objective (RTO): We aim for an RTO of under 4 hours for critical systems, ensuring minimal downtime.
-
Recovery Point Objective (RPO): Our RPO targets are set to under 1 hour, reducing potential data loss in disaster scenarios.
Regular Testing and Validation
-
Disaster Recovery Drills: We conduct quarterly DR drills, including simulated regional outages, to test the effectiveness of our recovery plans.
-
Plan Reviews and Updates: Post-drill analyses are performed to identify gaps, and recovery plans are updated accordingly to adapt to evolving infrastructure and threat landscapes.
Documentation and Communication
-
Comprehensive DR Documentation: All disaster recovery procedures are thoroughly documented, including step-by-step recovery processes and contact lists.
-
Stakeholder Communication Plans: We maintain clear communication protocols to keep stakeholders informed during disruptions, ensuring transparency and coordinated response efforts.
Do you have liability insurance or legal protections in place for cross-border cooperation?
Yes, we maintain comprehensive liability insurance and adhere to legal frameworks facilitating cross-border operations between the EU and Ukraine. Our company is a registered member of Ukraine’s IT Cluster and operates under the Diia City legal regime, ensuring compliance with both Ukrainian and EU regulations. We are prepared to provide documentation upon request to support due diligence processes.
Are your team members permanent staff or freelancers? Where are they based?
Our core team comprises permanent staff members based in Ukraine, Poland, and Croatia. This stable team structure ensures consistent quality and accountability across projects.
How do you ensure code quality and consistency with distributed teams?
We uphold high code quality and consistency across our distributed teams through standardized coding guidelines, rigorous code reviews, and automated testing. Utilizing continuous integration and deployment (CI/CD) pipelines, we ensure that every code change is systematically tested and integrated. Regular team syncs and clear documentation further facilitate alignment and maintain our development standards.