Career Opportunities | Cloud Collab Technologies

Career Opportunities

CloudCollab is helping companies build the future of telecommunications. We invite you to join our dynamic team and be part of that journey.

A developer handles a range of Java-related duties throughout the software development lifecycle, from concept and design through testing, and is expected to build user information solutions by developing, implementing, and maintaining Java-based components and interfaces.

Position Requirements:


  • As a technical team member, work with the offshore/US project team and client on engagements focused on Oracle EBS / Cloud applications.
  • Understand customer business processes and functional specifications, and prepare technical designs.
  • Provide technical support for Oracle Integration Cloud, data conversions, and reports.
  • Develop and unit-test technical components to standards.
  • Participate in automation and digital transformation activities within or outside client projects.

Desired Knowledge:


  • Good knowledge of the modules and processes around Oracle Finance/SCM applications.
  • End-to-end implementation experience with Oracle Cloud / ERP applications.
  • Excellent communication skills and the ability to interact with external teams and clients.
  • Experience working with clients and US counterparts to understand their business requirements and provide the right solutions.

Must have Skills:


Candidates should have strong knowledge in three or more of the areas below:

  • SQL and PL/SQL
  • Data migration using FBDI
  • Oracle SaaS BI / OTBI / FR reports
  • Cloud integration (ICS/OIC)
  • Oracle VBCS / APEX
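
To illustrate the kind of SQL work the first bullet calls for, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for an Oracle schema. The table and column names are invented for illustration; PL/SQL itself is Oracle-specific and not shown.

```python
import sqlite3

# Illustrative schema only: sqlite3 stands in for an Oracle database,
# and the suppliers/invoices tables are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE suppliers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoices  (id INTEGER PRIMARY KEY, supplier_id INTEGER,
                            amount REAL);
    INSERT INTO suppliers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO invoices  VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# A join/group-by aggregate of the sort used to validate converted
# financial data: invoice totals per supplier.
rows = conn.execute("""
    SELECT s.name, SUM(i.amount) AS total
    FROM suppliers s JOIN invoices i ON i.supplier_id = s.id
    GROUP BY s.name ORDER BY s.name
""").fetchall()

print(rows)  # [('Acme', 350.0), ('Globex', 75.0)]
```

The same join-and-aggregate shape carries over directly to Oracle SQL; only the connection layer and dialect details change.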

Good to have skills:

  • Knowledge of emerging technologies such as RPA, IoT, and blockchain

QA Automation Engineer who understands the basics of DevOps (pipeline structures, Jenkins)

Key Skills:


  • Proficiency in Java, with a good understanding of its ecosystems
  • Expert in Selenium
  • Expert in REST Assured

NOTE: A test framework is not required, as we use TAP.

Key Skills:


  • Frontend: HTML (Bootstrap 4 & 5) and CSS/JS mandatory; Angular/React optional.
  • Backend: Python with the Flask framework is mandatory.
  • DevOps: basic knowledge of cloud and infrastructure services, with CI/CD and deployment pipelines.
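
As a rough illustration of the mandatory Flask requirement, here is a minimal JSON route exercised with Flask's built-in test client (no server needed). The endpoint path is invented, and the sketch assumes Flask is installed.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical endpoint path, for illustration only.
@app.route("/api/health")
def health():
    # jsonify builds a JSON response with the right Content-Type.
    return jsonify(status="ok")

# Flask's test client exercises the route in-process.
resp = app.test_client().get("/api/health")
print(resp.status_code, resp.get_json())
```

The test-client pattern shown here is also how routes are typically unit-tested in CI.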

Key Skills:


  • iOS development
  • Swift
  • MVVM Architecture
  • WKWebView
  • UIKit
  • Auto Layout, Xib

Skill Set:


  • Extensive experience with REST APIs, Spring MVC/Boot, and RESTful/SOAP web services, with good knowledge of core Java collections.
  • Exposure to DB table design and at least one JPA implementation such as Spring Data JPA or Hibernate is mandatory.
  • Additional knowledge/experience with front-end technologies (JavaScript, HTML, the AngularJS framework) is preferred but not mandatory.

Job Requirements:


  • Develop high-performance, cloud-native Java microservices.
  • Mentor and guide junior developers in completing their stories.
  • Work closely with business analysts, end users, and architects to build flexible enterprise software.

Preferred Skills:


  • Hands-on with containers (Docker)
  • Hands-on with orchestration (Kubernetes)
  • Hands-on with CI/CD tools (Jenkins, Groovy, Git)
  • Hands-on with monitoring tools (Prometheus, Grafana, Kibana, and Elastic)
  • Hands-on with Chef and Ansible
  • Scripting: Bash/Python
  • Knowledge of GKE or AKS
  • AWS/GCP preferred for cloud

SRE 1


  • Hands-on with containers (Docker)
  • Hands-on with orchestration (Kubernetes)
  • Hands-on with CI/CD tools (Jenkins, Groovy, Git)
  • Hands-on with monitoring tools (Prometheus, Grafana, Kibana, and Elastic)
  • Hands-on with Chef and Ansible
  • Scripting: Bash/Python
  • Knowledge of GKE or AKS
  • AWS/GCP preferred for cloud
  • Good troubleshooting skills

Job description:


  • 4+ years’ experience working through the design, development, testing, and release cycle and delivering software products.
  • Good hands-on experience with AngularJS (and Angular), CSS, and HTML5.
  • Strong front-end skills and the ability to understand requirements and implement them.

Requirements:


  • Angular: 4 years (Required)
  • HTML5, JavaScript, CSS: 5 years (Required)
  • Experience with NgRx
  • Strong technical background in the latest and emerging technologies, including SaaS product lines
  • Experience working with cross geo teams.
  • Exposure to Telecommunications would be a plus.

As the Security Lead Engineer, you will be responsible for leading and implementing security measures to safeguard our organization's systems, networks, and data. You will play a pivotal role in securing our infrastructure, detecting and mitigating threats, and ensuring compliance with security best practices and standards.

Key Responsibilities:


  • Lead the design, implementation, and maintenance of security measures, including SIEM, network security, vulnerability management, and monitoring solutions.
  • Develop and enforce security policies, procedures, and standards to protect the organization from cyber threats.
  • Collaborate with cross-functional teams to integrate security into the software development lifecycle, network architecture, and cloud environments.
  • Manage and configure SIEM tools (e.g. QRadar, Splunk, Elastic) to collect, analyze, and correlate security data from various sources.
  • Oversee network security measures, including firewalls, intrusion detection/prevention systems, and access control mechanisms (e.g., WAF, IDS/IPS, Tenable.io, Nessus)
  • Conduct vulnerability assessments, penetration tests, and security audits to identify and remediate security weaknesses.
  • Lead incident response and investigation efforts to address security incidents and breaches promptly.
  • Implement and manage security monitoring and alerting systems to detect and respond to security events in real-time.
  • Stay current on emerging threats, vulnerabilities, and security technologies to recommend and implement security enhancements.
  • Mentor and provide guidance to junior security team members.
  • Collaborate with third-party security vendors and assess their services for alignment with organizational security goals.
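
Many of the SIEM duties above come down to correlation rules over event streams. As a tool-agnostic sketch (the event fields `src_ip` and `outcome` are invented; real rules live in QRadar/Splunk/Elastic), a rule flagging repeated failed logins might look like:

```python
from collections import Counter

# Toy event stream; field names are hypothetical.
events = [
    {"src_ip": "10.0.0.5", "outcome": "fail"},
    {"src_ip": "10.0.0.5", "outcome": "fail"},
    {"src_ip": "10.0.0.9", "outcome": "ok"},
    {"src_ip": "10.0.0.5", "outcome": "fail"},
]

def brute_force_suspects(events, threshold=3):
    """Correlation rule: sources with >= threshold failed logins."""
    fails = Counter(e["src_ip"] for e in events if e["outcome"] == "fail")
    return sorted(ip for ip, n in fails.items() if n >= threshold)

print(brute_force_suspects(events))  # ['10.0.0.5']
```

In a real SIEM this count-over-window logic would be expressed in the tool's rule language and fed by normalized log sources rather than an in-memory list.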

Qualifications:


  • Bachelor’s degree in Computer Science, Information Security, or a related field (Master's preferred).
  • 6+ years of experience in information security roles with a focus on SIEM, Network Security, Vulnerability Management, and Monitoring.
  • Proficiency in configuring and managing SIEM solutions (e.g. QRadar, Splunk, Elastic).
  • Strong knowledge of network security principles and best practices.
  • Experience with vulnerability scanning tools (e.g. Tenable.io, Nessus, Qualys) and penetration testing.
  • Familiarity with cloud security concepts and practices (e.g. IBM Cloud, AWS, Azure, GCP).
  • Security certifications such as CISSP, CISM, or GIAC are a plus.
  • Excellent problem-solving skills and a deep understanding of security frameworks and standards.
  • Strong communication and cross-functional and team leading abilities.
  • Commitment to continuous learning and staying updated on security trends and threats.

We are looking for an experienced API developer using Apigee to join our team. The ideal candidate will be responsible for designing, developing, and maintaining APIs that enable communication between various software applications. The API developer will work closely with cross-functional teams to understand business requirements and translate them into API specifications.

Key Responsibilities:


  • Design, develop, and maintain APIs using Apigee
  • Collaborate with cross-functional teams to understand business requirements and translate them into API specifications
  • Implement security measures to protect APIs from unauthorized access
  • Test and debug APIs to ensure they are working correctly
  • Monitor and troubleshoot production APIs to ensure they are meeting SLAs
  • Continuously improve API performance and scalability
  • Document APIs and provide support to API users

Requirements:


  • Bachelor's degree in Computer Science, Software Engineering or related field
  • Minimum of 3 years of experience in designing and developing APIs using Apigee
  • Strong knowledge of RESTful API design principles
  • Experience with API security and authentication protocols (OAuth2, JWT, etc.)
  • Understanding of API management tools and concepts (API Gateway, API Proxy, etc.)
  • Familiarity with API monitoring and analytics tools (Apigee Analytics, Splunk, etc.)
  • Excellent problem-solving and analytical skills
  • Strong communication and collaboration skills
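
As an illustration of the JWT requirement above, here is a self-contained HS256 sign/verify sketch using only the standard library. It shows token structure only; in production you would use a vetted library (e.g. PyJWT) or an API-gateway JWT policy, and the secret here is a throwaway demo value.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, expected)

token = sign_hs256({"sub": "client-42"}, b"demo-secret")
print(verify_hs256(token, b"demo-secret"))   # True
print(verify_hs256(token, b"wrong-secret"))  # False
```

The three dot-separated segments (header, payload, signature) are exactly what an API gateway inspects when enforcing JWT-based access control.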

As a Senior DevOps Engineer with a strong emphasis on DevOps practices, you will lead the charge in optimizing our software development and operations processes. Your primary objective is to elevate our DevOps capabilities by focusing on key areas such as builds – the pipeline, automation, and code quality, all while incorporating vital security components. An Ideal candidate will possess a deep understanding of security from a DevOps perspective. The person must be able to identify gaps, bring the DevSecOps framework to the next maturity level while seeing the bigger picture by effective use of automation, coordinating with cross functional team and enabling the industry best practices within the team.

Key Responsibilities:


  • Spearhead the design, implementation, and maintenance of efficient CI/CD pipelines and automation frameworks, with a foundation in DevOps best practices.
  • Identify areas for improvement and implement automation solutions to streamline critical processes.
  • Collaborate closely with development and operations teams to seamlessly integrate security into the software development lifecycle.
  • Champion code quality and sanity checks, emphasizing early detection and remediation of issues.
  • Incorporate security tools such as Sonarqube, Data Theorem, Veracode, and BurpSuite into the DevOps workflow to enhance code security without compromising agility.
  • Actively monitor and respond to security-related concerns within the DevOps pipeline.
  • Foster a security-conscious culture by providing guidance and promoting security awareness among development and operations teams.
  • Stay informed about evolving security practices within the DevOps landscape.

Qualifications:


  • Bachelor’s degree in Computer Science or a related field (Master's preferred).
  • 6+ years of experience as a DevOps Engineer with a strong focus on DevOps fundamentals.
  • Proficiency in scripting and automation, including Python, Shell scripting, and DevOps toolsets.
  • Hands-on experience with DevOps tools like Docker, Kubernetes, Jenkins, and GitLab CI/CD, with an emphasis on code quality, automation, and efficiency.
  • A solid understanding of security concepts from a DevOps perspective, including secure coding practices.
  • Familiarity with security tools and practices, such as Sonarqube, Data Theorem, Veracode, and BurpSuite, as integral parts of the DevOps workflow.
  • Exceptional problem-solving skills and meticulous attention to detail.
  • Strong communication and collaboration abilities.
  • A commitment to staying current with evolving DevOps and security trends and technologies.

In this role, you will take the lead in enhancing our DevOps capabilities, ensuring that our software development processes are efficient, secure, and optimized for quality. By balancing DevOps excellence with a security-conscious mindset, you will help us deliver reliable and secure software solutions.

Minimum Qualifications:


  • BE/BS, with 10+ years of experience or ME/MS with 9+ years of experience in software development
  • Sound knowledge of database design patterns, OOP concepts, and RDBMS concepts
  • Strong, fully hands-on experience with MariaDB
  • Optimization of queries and performance tuning
  • Ability to debug DB cores and analyse locks
  • Administration of MariaDB and MySQL databases on Unix
  • Excellent troubleshooting skills
  • Install, Upgrade and Patching of databases
  • Understanding of backup and restore
  • Good understanding of cloud RDS
  • Commitment to quality and high standards
  • Excellent communication skills
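
The query-optimization bullets can be illustrated with a small sketch in which sqlite3 stands in for MariaDB (where you would use MariaDB's own `EXPLAIN`): the query plan for a filtered lookup changes once an index exists. Table and index names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 100) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 4.
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"
before = plan(query)                     # full table scan
conn.execute("CREATE INDEX idx_orders_cust ON orders(customer_id)")
after = plan(query)                      # index search

print(before)
print(after)
```

Reading plans before and after adding an index is the same workflow a MariaDB DBA follows with `EXPLAIN` and `ANALYZE TABLE` when tuning slow queries.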

We are looking for a self-motivated, client-facing, and knowledgeable Incident Manager who embraces challenges while supporting a global customer base.

Roles & Responsibilities:


  • Ability to decipher technical information on fault tickets and route tickets to L2 where applicable, within defined KPI measures
  • Record reported issues and classify them as Incidents
  • Take immediate action to restore a failed service as quickly as possible
  • Provide L2/L3 support to customers in isolating, diagnosing, reproducing, and solving technical issues in a timely manner.
  • Understand and manage client expectations to ensure strong client service and satisfaction
  • Communicate with the customer on the status and provide regular progress updates by
    • Attending Incident resolution calls with clients and internal stakeholders
    • Representing the team on routine Incident update calls with clients
    • Managing clients on live calls
  • Work with respective support teams for Incident investigation, diagnosis and resolution.
  • Determine if an incident needs to be escalated according to priority and severity of issue
  • Participate in Incident review following major Incidents
  • Draft and prepare final RCAs for Sev-1s and Sev-2s
  • Facilitate and support lessons learned reviews and track RCA and remediation items.
  • Monitor the Incidents and manage workload in their respective queues to ensure that client's SLAs and internal OLAs are respected
  • Actively contribute to the Knowledge Base and other organization driven initiatives.
  • Have a level-headed customer-first approach and be passionate about solving customer issues.
  • Have a strong aptitude for learning new technologies and understanding how to utilize them in a customer facing environment.
  • Able to work independently, responding to customer issues and driving them to resolution with minimal supervision
  • Prepare incident reports and trackers (internal, external, ad hoc)
  • Work on real-time notifications for incident communication to clients

Critical Skills:


Communication

  • Ability to communicate confidently and effectively at all levels, both verbally and in writing, with clients and within the organization
  • Logical approach to problem solving
  • Adhere to predefined Incident Management Process
  • Should be able to distinguish between different severities and priorities as per defined definitions and act accordingly
  • Should be able to recognize incidents eligible to be escalated to Engineering and Dev teams

Technical

Should be able to understand and collect the prerequisites needed for initial troubleshooting of the reported issue.

Mandatory Requirements:

  • In depth knowledge of Linux and troubleshooting skills
  • Hands on experience on Telco Call flows (MO & MT) / VoIP and Telecom networking protocol
  • Strong previous experience with various telecom technologies, including knowledge of hosted and SIP technologies - SIP Error Codes/Methods
  • Experience on SMSC, USSD, VAS, SDP (Service Delivery Platform)
  • SDP / RDP knowledge
  • SS7 protocol & concepts
  • Knowledge of Wireshark, SMS and TCPdump
  • Networking Concepts
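
As a small illustration of the protocol knowledge listed above, here is a sketch that parses a SIP response status line of the kind seen in Wireshark traces (the messages are fabricated examples; real SIP handling uses a full stack, not hand parsing):

```python
def parse_sip_status(line: str):
    """Parse 'SIP/2.0 <code> <reason>' into (code, reason)."""
    version, code, reason = line.split(" ", 2)
    if version != "SIP/2.0":
        raise ValueError("not a SIP/2.0 status line")
    return int(code), reason

# 4xx/5xx codes like 486 are the "SIP error codes" the role mentions.
print(parse_sip_status("SIP/2.0 486 Busy Here"))  # (486, 'Busy Here')
print(parse_sip_status("SIP/2.0 200 OK"))         # (200, 'OK')
```

Recognizing these status lines quickly in a packet capture is often the first step of the primary troubleshooting the section describes.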

Desired:

  • Hands-on experience with MySQL or PostgreSQL, and MongoDB
  • Knowledge of cloud-based operations.

Key Result Areas:


  • Quality of incident handling (communication, coordination, adherence to process)
  • Percentage reduction in the outage time of Incidents affecting service to customers / percentage reduction in SLA breaches
  • Number of Incidents resolved through the Incident Management process, per ITIL standards

Preferred / Required qualifications:


  • Bachelor’s degree in Engineering / Science
  • ITIL Certification (Desired)
  • Experience in defining criteria for Incident Management

The ideal candidate for this role will have strong Core Java skills and a solid understanding of building, testing, and deploying microservices. Primary responsibilities include designing, developing, and testing globally deployed, highly available cloud-based microservices; applying current software development practices and principles to identify and implement process improvements; and working with microservices teams on RESTful API designs, assisting with future scripted APIs and WebSocket investigations.

Key Skills:


  • Mandatory - Java 8-11, RDBMS concepts, Spring/Spring Boot, REST APIs, WebSockets, SOA, microservices, AWS EC2, Lambda, S3, Docker containers
  • Nice to have - React JS, Redux, CSS, or Angular JS

Technical Competency:


  • Strong Core Java skills, microservices, REST APIs, Spring, Hibernate, WebSockets, CI/CD.
  • Solid experience with SQL/NoSQL and cloud-based technologies.
  • Good experience with manual and automation testing with tools like Selenium.
  • Ability to work independently or as part of a larger global development team in agile.
  • Willingness to learn new technologies and demonstrate commitment to excellence for the continuous improvement of our products, code base, processes, and tools.
  • Use of test management and bug management tools like Zephyr and JIRA.
  • Strong knowledge of CI tools such as Jenkins, aligned with the DevOps deployment process.
  • Experience with Agile or Scaled Agile Framework (SAFe) work environments
  • Understanding and experience of BDD/TDD strengths and weaknesses is a plus.
  • Experience and interest in JavaScript, HTML, and CSS will be an advantage.

Preferred Skills:


  • Designing a modern highly responsive web-based user interface
  • Strong and hands-on experience in React JS, Redux, HTML, modular CSS, JavaScript, libraries, components.
  • Familiarity working with REST APIs for deep integrations with platforms
  • Experience with automated testing suites, like Jest or Mocha
  • Should understand principles of mobile development
  • Should work closely with our product, design, and UX teams to create amazing and intuitive experiences that make it effortless to connect different apps together.

This is a hybrid role: 2 or 3 days per week working from the office.

Key Skills:


  • C/C++
  • Linux
  • OOP
  • Data structures
  • Multithreading
  • Networking
  • Telecom

Job duties:


  • Develop, evangelise, and enforce enterprise data standards across TVS group companies.
  • Liaise with different stakeholders in converting the data management strategy and policies to detailed standards, guidelines, and design patterns for managing data across its lifecycle.
  • Take ownership of building required checklists enforcing the data standards across SDLC.
  • Document various design patterns around data modeling, data storage, security, movement, integration, retention, transformations, consumption, and purge covering different use cases (BPM applications, OLAP applications, data engineering, master data management, real-time use cases, data migrations, etc.)
  • Be a go-to resource for data management and provide advice to application development teams on an as-needed basis.
  • Create enterprise data models (conceptual and logical) for consumption across the enterprise.

Skills and Experience:


  • 10-15 years of experience on data projects (data modeling, data architecture, data warehousing, data lakes, data engineering, business intelligence, etc.)
  • 7+ years of relevant experience as an enterprise data architect.
  • Must have provided data architecture for complex data projects covering the entire data lifecycle.
  • Strong understanding of multi-domain MDM design patterns.
  • Extensive consulting background is a plus.
  • Must have experience in creating high-impact, detailed documents and presentations for consumption by (presenting to) audience across different levels.
  • Must have experience in cloud-based data architectures.
  • Good understanding of engineering processes like Release Management, engineering, and enterprise BOM Management/Change Management
  • Certification in enterprise architecture frameworks such as TOGAF, Zachman is a plus.
  • Experience creating architecture documents, detailed designs, recommendations, etc., with a top-down approach.
  • Experience in manufacturing or automotive domain is a plus.
  • Must have strong written and verbal communication skills.

Job duties:


  • Understand the current master data sources and applications, related data quality issues, and data consumption issues, and provide a scalable MDM solution architecture
  • Lead the technical implementation of an MDM tool and ensure that a single source of truth is available for data consumers
  • Profile the data from different master data sources, collaborate with different business and IT users to understand the current-state pain points and establish implementation success criteria
  • Provide daily status reports to supervisor
  • Develop a master data model that will cater to all data consumers
  • Enrich the master data with other data to add more context to the master data
  • Provide high-level and low-level design for the master data pipeline (sourcing, cleansing, standardizing, matching, merging, exception workflow, master data publishing, etc.)
  • Help data project manager in mitigating the risks in MDM implementation
  • Design a report on MDM operational metrics
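
The matching/merging steps above can be sketched as a toy match-and-merge pass: group records on a normalized key and apply a simple survivorship rule. The field names and the "most complete record wins" rule are illustrative only, not the behavior of any specific MDM tool.

```python
def norm(name: str) -> str:
    # Normalization: lowercase and collapse whitespace before matching.
    return " ".join(name.lower().split())

def merge_masters(records):
    """Group by normalized name; keep the most complete record per group."""
    golden = {}
    for rec in records:
        key = norm(rec["name"])
        best = golden.get(key)
        # Illustrative survivorship rule: prefer the record with more
        # populated fields.
        if best is None or (sum(v is not None for v in rec.values())
                            > sum(v is not None for v in best.values())):
            golden[key] = rec
    return golden

records = [
    {"name": "Acme  Corp", "city": None},
    {"name": "acme corp",  "city": "Chennai"},
    {"name": "Globex",     "city": "Pune"},
]
print(sorted(merge_masters(records)))  # ['acme corp', 'globex']
```

Production MDM adds fuzzy matching, match-score thresholds, and exception workflows around exactly this core loop.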

Skills and Experience:


  • Bachelor's degree in any discipline
  • 8+ years of experience developing and deploying MDM solutions using different implementation styles
  • At least 5 years of experience as an MDM Architect
  • Strong understanding of multi-domain MDM design patterns.
  • Experience in the automotive industry
  • Hands-on experience in implementing any of the popular MDM tools (Informatica, Ataccama, Profisee, Syndigo, Talend, etc.)
  • Experience training the data stewards on responding to the MDM workflow exceptions
  • Ability to work in an agile, dynamic environment
  • Self-starter with an ability to work with minimal or no direction
  • Good verbal and written communication skills, and presentation skills

Job duties:


  • Connect to different data sources and harvest technical metadata (data at rest and data in motion).
  • Coordinate with Business and IT stakeholders and enrich the data catalog with additional metadata (business terms, attribute descriptions, data classifications, data domain, data owner, data steward, process, system, regulation, policy, etc.)
  • Validate the metadata with relevant stakeholders before publishing for production use.
  • Build data quality rules for critical data elements and validate them with relevant stakeholders.
  • Train data stewards and other business, IT stakeholders on how to use the metadata management tool
  • Create a standard process to gather business and technical metadata and train stakeholders
  • Provide weekly status reports on the metadata harvesting and enrichment progress to relevant stakeholders.
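
The data-quality-rule duty above can be sketched with two common rule types, completeness and pattern conformance. The email column, the pattern, and any thresholds are illustrative; real rules are configured in the cataloging tool against critical data elements.

```python
import re

def null_rate(values):
    """Completeness rule: fraction of missing values."""
    return sum(v in (None, "") for v in values) / len(values)

def pattern_pass_rate(values, pattern):
    """Conformance rule: fraction of non-empty values matching a pattern."""
    rx = re.compile(pattern)
    filled = [v for v in values if v]
    return sum(bool(rx.fullmatch(v)) for v in filled) / len(filled)

emails = ["a@x.com", "b@y.org", "", "not-an-email"]
print(null_rate(emails))                                   # 0.25
print(pattern_pass_rate(emails, r"[^@\s]+@[^@\s]+\.\w+"))  # 2/3
```

Scorecards of the kind mentioned above are essentially these per-rule rates aggregated over critical data elements and tracked over time.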

Skills and Experience:


  • 6-10 years of overall experience working on data-related products.
  • 4 years of relevant experience in data governance projects (specifically data cataloging and metadata management).
  • Must have experience using any popular data cataloging and metadata management tool to harvest and enrich metadata
  • Data quality experience (data quality rules configuration, data profiling, scorecarding, etc.) is a big plus.
  • Good verbal and written communication skills.
  • Must be a self-starter and be able to work independently.
  • Must be customer-facing.
  • Must be hands-on.

Job duties:


  • Design ML and DL algorithms based on the requirement.
  • Assist with building a feature store for improving reusability across the enterprise.
  • Assist with building an end-to-end MLOps framework and a repeatable process, preferably on the Azure platform, for all data science and AI teams to use.
  • Demonstrate the use of feature store and MLOps pipeline with the help of two to three machine learning and deep learning use cases.
  • Build, test and demonstrate a repeatable process with custom or external tools for unstructured data annotation.
  • Create documentation and training material on the frameworks and processes developed.
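
As a sketch of the feature-store idea referenced above (real implementations would use a platform such as the Databricks Feature Store; all names here are invented), the core contract is "register a feature once, serve it for any entity":

```python
class FeatureStore:
    """Minimal in-memory feature store, for illustration only:
    features are registered once and served as vectors per entity id."""

    def __init__(self):
        self._features = {}  # feature name -> {entity_id: value}

    def register(self, name, values):
        self._features[name] = dict(values)

    def get_vector(self, entity_id, names):
        # Missing features come back as None, mirroring point lookups.
        return [self._features[n].get(entity_id) for n in names]

store = FeatureStore()
store.register("avg_spend", {"cust-1": 120.5, "cust-2": 80.0})
store.register("visits_30d", {"cust-1": 4, "cust-2": 9})

print(store.get_vector("cust-1", ["avg_spend", "visits_30d"]))  # [120.5, 4]
```

Reusability across teams comes from this shared registry: training and serving both read the same named features instead of recomputing them per project.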

Skills and Experience:


  • 7-10 years of overall experience in data science and machine learning.
  • 4+ years building ML models and engineering automated processes to test, operationalize, and monitor them.
  • Experience building a feature store/registry is a must.
  • Experience in unstructured data annotation is required.
  • Experience in Databricks platform with Unity catalog for machine learning development is a must.
  • Experience in Azure ML tool stack is a plus.
  • Hands-on and expert-level experience in using frameworks such as TensorFlow, Keras, Scikit-Learn, PyTorch, etc.
  • Good verbal and written communication skills.
  • Must be a self-starter and be able to work independently.
  • Must be customer-facing.
  • Must be hands-on.

Responsibilities:


  • Collaborate with clients and functional consultants to gather business requirements and translate them into technical specifications for Oracle Extensions VBCS solutions.
  • Design and develop custom extensions and integrations using Oracle VBCS, leveraging visual development tools, JavaScript, HTML, CSS, and other relevant technologies.
  • Customize and extend Oracle applications, modules, and workflows using VBCS to meet specific business needs.
  • Develop data models, create REST APIs, and configure integrations to connect Oracle VBCS with other systems and databases.
  • Perform testing, debugging, and troubleshooting to ensure the quality, performance, and security of Oracle Extensions VBCS solutions.
  • Collaborate with cross-functional teams to ensure smooth integration with existing systems and data sources.
  • Provide technical guidance and expertise to clients and project teams throughout the implementation lifecycle.
  • Stay updated with the latest Oracle VBCS features, enhancements, and best practices for rapid application development.
  • Document technical specifications, configurations, and procedures related to Oracle Extensions VBCS solutions.
  • Assist in the migration and deployment of Oracle Extensions VBCS applications to production environments

Mandatory :


  • Experience developing data models using a DB and working with REST and SOAP web services
  • Basic knowledge of JavaScript, HTML, and CSS
  • VBCS custom page development experience

Responsibilities:


  • As a technical team member, work with the offshore/US project team and client on engagements focused on Oracle EBS / Cloud applications.
  • Understand customer business processes and functional specifications, and prepare technical designs.
  • Provide technical support for Oracle Cloud integration, data conversions, and reports.
  • Develop and unit-test technical components to PwC standards.
  • Participate in automation and digital transformation activities within or outside client projects.

Desired Knowledge:

  • Good knowledge of the modules and processes around Oracle Finance/SCM applications.
  • End-to-end implementation experience with Oracle Cloud / ERP applications.
  • Excellent communication skills and the ability to interact with external teams and clients.
  • Experience working with clients and US counterparts to understand their business requirements and provide the right solutions.

Must have skills:


Candidates should have strong knowledge in three or more of the areas below:

  • SQL and PL/SQL
  • Data migration using FBDI
  • Oracle SaaS BI / OTBI / FR reports
  • Cloud integration (ICS/OIC)
  • Oracle VBCS / APEX

Good to have skills:


  • Knowledge of emerging technologies such as RPA, IoT, and blockchain

As a technical team member, work with the offshore/US project team and client on engagements focused on Oracle EBS / Cloud applications.

Desired Knowledge :


  • Good knowledge of the modules and processes around Oracle Finance/SCM applications.
  • End-to-end implementation experience with Oracle Cloud / ERP applications.
  • Excellent communication skills and the ability to interact with external teams and clients.
  • Experience working with clients and US counterparts to understand their business requirements and provide the right solutions.

Must have skills:


Candidates should have strong knowledge in three or more of the areas below:

  • SQL and PL/SQL
  • Data migration using FBDI
  • Oracle SaaS BI / OTBI / FR reports
  • Cloud integration (ICS/OIC)
  • Oracle VBCS / APEX

Responsibilities:


  • As a conversion lead, work with the offshore/US project team and client on engagements focused on Cloud applications, and drive different teams toward the data conversion schedule defined in the project plan.
  • Understand customer business processes and functional specifications in the area of Finance data conversions, and prepare technical designs.
  • Provide technical support for data conversion; develop and unit-test technical components to PwC standards.

Desired Knowledge:

  • Good knowledge of the modules and processes around the Oracle Financials and Oracle Supply Chain modules.
  • End-to-end implementation experience with Oracle Cloud / ERP applications.
  • Understanding of the end-to-end conversion process and steps.
  • Excellent communication skills and the ability to interact with external teams and clients.
  • Experience working with clients and US counterparts to understand their business requirements

Must have skills:


Candidates should have strong knowledge in the three areas below:

  • SQL and PL/SQL
  • Data migration using FBDI and ADFDI
  • Understanding of REST and SOAP APIs in SaaS

Good to have skills:


  • Alteryx, Jira and Talend

Job Requirements:


  • Candidates should have a minimum of 4 years of experience in testing.
  • 3+ years in C# automation
  • 2+ years in SpecFlow and BDD.

Job Requirements:


  • Good communication skills
  • Contract staffing background only
  • Client handling experience is required
  • Should be good at sourcing and screening
  • Ability to align recruitment strategies with overall business goals.
  • Strong networking skills to build and maintain relationships with candidates, hiring managers, and industry professionals.
  • Proficiency in using advanced sourcing methods, including social media, professional networks
  • Strong negotiation skills for salary discussions, offer acceptance, and other terms.

Job Requirements:


  • Good communication skills
  • Contract staffing background only
  • Should be good at sourcing and screening
  • Ability to align recruitment strategies with overall business goals.
  • Strong networking skills to build and maintain relationships with candidates, hiring managers, and industry professionals.
  • Proficiency in using advanced sourcing methods, including social media, professional networks
  • Strong negotiation skills for salary discussions, offer acceptance, and other terms.

About You:

We’re looking for a high-achieving, full-time Staff AI Engineer to join our engineering team: someone who has an interest in and a good understanding of DevOps and wants to help design, implement, launch, and scale major AI/ML systems and user-facing features. You're comfortable working in a fast-paced environment with a small, talented team where you're supported in your efforts to grow professionally. You're able to manage your time well, communicate effectively, and collaborate in a fully distributed team.

Our backend tech stack currently consists primarily of Python Flask web apps. Our data stores include MongoDB, Neo4j, and Redis. We use AWS EventBridge for our integrations and design loosely coupled applications. Our stack includes GPT integrations and data processing pipelines supporting Generative AI applications. The underlying infrastructure runs on AWS, using a combination of managed services like EKS and non-managed services running on EC2 instances. All of our compute runs through CI/CD pipelines that build Docker images, run automated tests, and deploy to our clusters in AWS. Our backend primarily serves a well-documented API consumed by our front-end Salesforce app and web app. Our infrastructure is automated using Terraform and other AWS tools.

Key Responsibilities:


  • Conceive, design, build, and launch intelligent features which will drive social impact in the world for the causes we all care about.
  • Design and implement data integration, acquisition, cleansing, harmonization, and transformation processes to create curated, high-quality datasets for data science and data discovery, and for use in vector stores/embeddings.
  • Develop and maintain scalable data processing pipelines and systems to support Generative AI applications
  • Monitor and optimize the performance and manage the costs of data processing pipelines and systems
  • Monitor technology trends and advancements in Generative AI and incorporate them to continuously innovate
  • Collaborate with Solution Architects, Data Scientists, Software Engineers, DevOps Engineers, Product Owners, researchers, and business stakeholders, both on the cross-functional team and across teams, to understand business needs, derive technical requirements, and ensure data availability, quality, and responsible data use that adheres to security, privacy, and compliance requirements
  • Continuously improve data acquisition, preparation, transformation, and publishing processes to meet business needs
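The vector-store/embedding work mentioned above boils down to representing documents and queries as vectors and retrieving by similarity. A toy sketch, with made-up vectors (real systems use learned embeddings and a dedicated store such as Pinecone or Pgvector):

```python
# Toy embedding-based retrieval: pick the stored document whose vector
# is most similar (by cosine similarity) to the query vector.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Hypothetical document embeddings (illustrative values only).
store = {
    "billing FAQ": [0.9, 0.1, 0.0],
    "API guide":   [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.05]

# Retrieve the nearest document to the query embedding.
best = max(store, key=lambda doc: cosine_similarity(store[doc], query))
```

In a retrieval-augmented generation setup, the retrieved document would then be passed to the language model as context.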

Requirements:


  • Bachelor’s Degree in Computer Science or similar discipline.
  • 7+ years of experience in data engineering, with 6 months of hands-on experience in Generative AI technologies like vector databases
  • Hands-on experience with text processing tools (e.g. spaCy, NLTK, Word2Vec, PyTorch)
  • Strong knowledge of network security principles and best practices.
  • Experience with security models and development on large data sets
  • Experience with at least one cloud platform like Azure, AWS, GCP
  • Proficiency with data engineering technologies and tools (e.g. Hadoop, Airflow, Pandas, etc.)
  • Hands-on experience with monitoring, management, scalability and automation of data processing pipelines and systems
  • Hands-on experience in software development with one major programming language (e.g. Python)
  • Excellent communication skills, with the ability to explain complex concepts in simple language
  • Hands-on experience in machine learning, especially deep neural networks and Generative AI models
  • Understanding of the differences, advantages, and disadvantages of the most common large language models (GPT, Llama, etc.)
  • Experience in developing end-to-end production-grade solutions with cloud and AI technology
  • Understanding of best practices for inclusion and maintenance of document stores in retrieval augmented generation strategies
  • Hands-on experience with information retrieval tools (e.g. ChromaDB, Pinecone, Pgvector, Elasticsearch, or other vector stores)
  • Passion for learning, innovation and staying current with industry trends in AI and technology
  • Good overview of re-usable frameworks and tools in the field of Generative AI (both commercial and open-source)
  • Comfortable solving ambiguous problems and adapting to a dynamic environment
  • Relentless with best practices and willing to discuss the choices you make with your fellow engineers and manager.

Bonus points if you have:


  • Contributed open source code related to our tech stack.
  • Led small project teams building and launching features.
  • Built B2B SaaS products.
  • Worked with complex architectures that support multiple APIs (e.g. REST, GQL, WebSockets) as well as async task and event processing frameworks.

Benefits:


  • Competitive wages.
  • 10 days of accrued paid vacation that increases with tenure.
  • 8 days of paid sick leave annually.
  • 10 additional paid company holidays.
  • Medical, dental, vision & life insurance options.

Salary Offer:


    We determine your level based on interview performance and make an offer based on geo-located salary bands. During the hiring process, we review the base salary, benefits, and number of options. Please keep in mind that any equity portion of an offer is not included in these numbers and can represent a significant part of your total compensation.

Job duties :

  • Build an automotive customer/prospect database with key contact information.
  • Collaborate with different data providers and ensure that good quality data feeds are established.
  • Liaise with Data Engineering teams in ensuring that the data coming from third parties is integrated and automated to continuously update the prospect database.
  • Liaise with Marketing teams to conduct roadshows and events if required.
  • Gather prospect data from Marketing teams on the events and roadshows they conducted.
  • Establish data quality checks to ensure the prospect data is of consistently high quality.
  • Provide daily status reports to the supervisor.

Skills and Experience:


  • 5+ years of experience in building prospect databases.
  • Experience in marketing products and persuading prospects to provide information and potentially buy products.
  • Must have good verbal and written communication skills.
  • Must be a self-starter and go-getter.
  • Must be able to work independently with very minimal direction and hand-holding.
  • Must be data-savvy.
  • Must have flair for meeting new people and building connections.

Job duties :

  • Gather the requirements for a Customer master data workflow, design a solution, develop, test, and deploy.
  • Code Azure Databricks/Spark scripts to match and merge duplicate customer data based on predefined matching rules (deterministic and fuzzy logic).
  • Develop a user interface and a configurable workflow so that potential duplicate customer records can be reviewed and merged manually.
  • Provide daily status reports to supervisor.
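The match-and-merge logic described above combines deterministic rules with fuzzy matching. A minimal sketch in plain Python (production code would run as Databricks/Spark jobs; the fields, rules, and threshold here are assumptions for illustration):

```python
# Sketch of deterministic + fuzzy duplicate detection for customer records.
from difflib import SequenceMatcher


def is_duplicate(a: dict, b: dict, threshold: float = 0.85) -> bool:
    # Deterministic rule: identical normalized email is a definite match.
    if a["email"].strip().lower() == b["email"].strip().lower():
        return True
    # Fuzzy rule: highly similar names are flagged as likely duplicates.
    ratio = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return ratio >= threshold


# Hypothetical records.
rec1 = {"name": "Jon Smith",  "email": "jon@example.com"}
rec2 = {"name": "John Smith", "email": "j.smith@example.com"}
rec3 = {"name": "Ana Torres", "email": "ana@example.com"}
```

Records flagged by the fuzzy rule (rather than the deterministic one) are exactly the candidates that would flow into the manual review workflow mentioned above.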

Skills and Experience:


  • 7+ years of experience in Java full-stack development.
  • 2+ years of experience in any workflow development.
  • 4+ years of experience in Azure Databricks/Spark development.
  • Must have good verbal and written communication skills.
  • Must be a self-starter and go-getter.
  • Must be able to work independently with very minimal direction and hand-holding.
  • Must be data-savvy.
