Openings in the Crayon box

Join our fab team of cool people

Why Crayon Data

Crayon Data is a fast-growing big data company with a vision to simplify the world’s choices.
Our clients are top-tier enterprises in the banking and hospitality space. They use our choice engine, Maya, to deliver digital, personal experiences centered around taste. We are a company with strong values and culture. And are champions of diversity. We have offices in Chennai and Singapore. Our client projects are currently in the US, London, UAE, India and SE Asia. We have entered the hyper-growth phase. We need as many brilliant minds as we can get to join us on this exciting journey. If you are ready to get your hands dirty while making lives simpler, talk to us!

Senior Data Miner

We are looking for data miners to design data analytics services and present complex concepts to business audiences.

What you will do
  • Design algorithms for product development and build analytics-based products
  • Coordinate individual teams to fulfil client requirements and manage deliverables
  • Lead analytical projects and deliver value to customers
  • Shape and manage business strategy from an analytics point of view
Requirements
  • Understand advanced analytics techniques, including predictive modelling (e.g. logistic regression), segmentation, forecasting, data mining, and optimization
  • Be proficient in software packages such as SAS, R, and RapidMiner for analytical modelling and data management, with experience in using Business Intelligence tools such as SAS, Microsoft, and Tableau for business applications
  • Aptitude for analytical problem solving
  • Communicate and present complex concepts to business audiences
  • Capable of working effectively in a global team
  • You should have 8 to 10 years of experience.
  • The position is based in Chennai, but requires travel to client locations in Myanmar or Dubai
Apply Now



Data Scientist

We are looking for Data Scientists to work on our global TasteGraph™, to solve ambitious and complex problems. And drive customer engagement for marquee clients around the world.

What you will do
  • Translate business requirements to a set of analytical models
  • Perform data analysis (with a representative sample data slice) and build/prototype model(s)
  • Work with business users and/or data scientists to formulate model designs using large data sets
  • Provide inputs to the data ingestion/engineering teams on data requirements for model(s), in terms of size, format, associations and cleansing
  • Identify/provide approaches and data to validate the model(s)
  • Collaborate with the data engineering team to transfer business understanding, and get model(s) productised
  • Validate output along with business users
  • Tune model(s) to improve results over time
  • Understand business challenges and goals of clients to formulate approaches for data analysis and model creation to support their business decision making
  • Perform hands-on data analysis and model creation
  • Work in highly collaborative teams that strive to build quality systems and provide business value
  • Mentor junior team members
Requirements
  • Understand business problems and address them by leveraging data, characterized by high volume and dimensionality, from multiple sources
  • Communicate complex models and analysis in a clear and precise manner
  • Build predictive statistical, behavioural or other models via supervised and unsupervised machine learning, statistical analysis, and other predictive modelling techniques.
  • Display a strong understanding of various types of recommender systems, such as collaborative filtering, content-based filtering, and association rule mining (an illustrative sketch follows this list)
  • Understand unstructured (text) data processing and NLP
  • Experience with matrices, distributions and probability
  • Hands-on experience in Java or Scala.
  • Demonstrate technical know-how of R, SAS, MATLAB or equivalent statistical/data analysis tools, and possess the ability to transfer that knowledge across different tools
  • Proficient with relational databases, natural language processing and at least one scripting language – preferably Python/Ruby
  • Have a working knowledge of the big data tech stack including Hadoop, Spark and NoSQL databases like Couchbase, HBase, Solr, etc.
  • Have previous exposure to DevOps, containers like Docker, and cloud environments like AWS, Azure, etc
  • You need to have 5 to 10 years of experience in a similar, relevant role. You also need to have worked in the big data space before, alongside a big data engineering team, a data visualization team, and data and business analysts
  • The position is based in Chennai but may require domestic and international travel.
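
For illustration only, here is a minimal item-based collaborative filtering sketch in Python, the kind of recommender logic this role works with. The ratings matrix, users and items are tiny hypothetical examples, not Crayon data, and this is a sketch rather than production code.

    # Item-based collaborative filtering on a tiny, hypothetical ratings matrix.
    import numpy as np

    ratings = np.array([      # rows = users, columns = items; 0 = not rated
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [0, 0, 5, 4],
    ], dtype=float)

    def cosine_sim(a, b):
        # Cosine similarity between two rating vectors (0 if either is all zeros).
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    n_items = ratings.shape[1]
    item_sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                          for j in range(n_items)] for i in range(n_items)])

    def predict(user, item):
        # Score an unrated item as a similarity-weighted average of the user's ratings.
        rated = ratings[user] > 0
        weights = item_sim[item, rated]
        total = weights.sum()
        return float(weights @ ratings[user, rated] / total) if total else 0.0

    print(round(predict(user=0, item=2), 2))  # predicted rating for user 0 on item 2
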
Apply Now



Solutions Architect

We are looking for solutions architects to work with recommendation systems. And help touch the lives of millions of users around the world.

What you will do
  • Lead a team of data analysts and engineers to build big data solutions that align with customers’ needs
  • Oversee end-to-end system design and architecture of solution infrastructure that will work on millions of data points
  • Travel to on-site locations and meet with customers to deliver projects
Requirements
  • Excellent analytical and problem-solving skills
  • Expert knowledge in at least one programming stack - preferably Java
  • Working knowledge of big data technologies like Hadoop, Spark etc.
  • Demonstrate expertise in cloud-based deployments in AWS, Azure or GCP
  • Be proficient in integrating with enterprise middleware, with exposure to ESB, SOA, etc.
  • Understand the basics of Machine Learning, NLP, IR, Algorithms etc.
  • You need to have a minimum of 8 years of experience working in the banking and airline verticals.
  • The position is based in Chennai but may require domestic and international travel.
Apply Now



Senior Software Engineer

We are looking for senior software engineers to lead a team of talented individuals and develop world-class products.

What you will do
  • Lead and mentor junior engineers
  • Write clean, scalable and testable code which will run on large Hadoop and Spark clusters
  • Troubleshoot, test and maintain the core product(s) and configurations to ensure strong optimization and functionality
  • Contribute to all phases of the product development lifecycle
Requirements
  • Follow industry best practices
  • Excellent analytical and problem-solving skills.
  • Expert knowledge in at least one programming stack – preferably Java
  • Have technical know-how of big data technologies like Hadoop, Spark etc.
  • Proficient in cloud-based deployments in AWS, Azure or GCP
  • Understand Machine Learning, NLP, IR, Algorithms etc.
  • Be comfortable working with Linux and Shell
  • Should be able to thrive in a fast-paced, quickly evolving tech start-up environment
  • You should have a minimum of 6 years of experience.
Apply Now



Software Engineer

We are looking for software engineers to build world-class products, as well as provide technical support to clients.

What you will do
  • Write clean, scalable and testable code to be run on large Hadoop and Spark clusters
  • Contribute to design and architecture of the product(s)
  • Participate in maintenance of the core product(s) and support customers
Requirements
  • Excellent analytical and problem-solving skills.
  • Contribute to a collaborative and dynamic team that works across time zones
  • Working knowledge of at least one programming stack - preferably Java
  • Have technical know-how in at least one database, such as RDBMS or NoSQL
  • Proficient with big data technologies like Hadoop, Spark etc.
  • Understand Machine Learning, NLP, IR, Algorithms etc.
  • Demonstrate the ability to deploy big data solutions in one or more clouds like AWS, Azure or GCP
  • Be comfortable working with Linux and Shell
  • Be comfortable working with Git or any versioning system
  • Should be able to thrive in a fast-paced, quickly evolving tech start-up environment
  • You should have a minimum of 3 years of experience.
Apply Now



Data Engineer

We are looking for data engineers to build and monitor high-quality infrastructure to analyse data.

What you will do
  • Write clean, scalable and testable code to be run on large Hadoop and Spark clusters
  • Assemble large, complex data sets that meet functional/non-functional business requirements
  • Build the infrastructure required for optimal ETL of data from a wide variety of data sources using SQL and big data technologies (an illustrative sketch follows this list)
  • Build analytics tools that utilize the data pipeline to provide actionable insights for the analytics and data science teams
  • Monitor performance and continuously improve the infrastructure
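
For illustration only, here is a minimal PySpark ETL sketch of the kind of pipeline work described above. The source path, column names and cleansing rule are hypothetical assumptions, not Crayon's actual pipeline.

    # Extract-transform-load sketch on Spark (hypothetical paths and columns).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw transaction records from a CSV source.
    raw = spark.read.csv("/data/raw/transactions.csv", header=True, inferSchema=True)

    # Transform: drop rows missing key fields, then aggregate spend per customer.
    clean = raw.dropna(subset=["customer_id", "amount"])
    spend = clean.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))

    # Load: write the curated data set as Parquet for downstream analytics and models.
    spend.write.mode("overwrite").parquet("/data/curated/customer_spend")

    spark.stop()
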
Requirements
  • Excellent analytical and problem-solving skills.
  • Follow industry best practices
  • Technical know-how of at least one programming stack - ideally Java
  • Showcase expertise in data warehousing, relational database architectures (Oracle, SQL, DB2, Teradata), and big data storage and processing platforms (Hadoop, HBase, Hive, Spark)
  • Have working knowledge in cloud-based deployments in AWS, Azure or GCP
  • Understand Machine Learning, NLP, IR, Algorithms etc.
  • Be comfortable working with Linux and Shell
  • Should be able to thrive in a fast-paced, quickly evolving tech start-up environment
  • You should have a minimum of 6 years of experience.
Apply Now

Can't find your colour?

Send us your CV, and we'll get in touch when a suitable opening pops up.