Phenom is hiring a Data Analyst in Hyderabad. Applicants should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, and the role is open to freshers, giving recent graduates an opportunity to begin a career in data analysis. The selected candidate will be based in Hyderabad, a city known for its thriving tech industry, and will have the chance to apply academic knowledge in a practical setting while growing in the evolving field of data analysis.
Company Name: Phenom
Job Role: Data Analyst
Education Required: Bachelor's degree in Computer Science, Information Technology, or a related field
Experience Required: Freshers
Job Location: Hyderabad
Role and Responsibilities:
- Use Python, SQL, and R to analyze, hypothesize about, and solve complex business challenges, and to identify and visualize new opportunities for growth.
- Be willing to work with Looker.
- Extract and report data using database queries; experience with cloud platforms, pipeline automation, and scalable analytics is preferred.
- Use statistical modeling and prediction techniques to drive actionable insights.
- Demonstrate curiosity and a problem-solving, research mindset.
- Work independently with minimal supervision and communicate your findings to technical and non-technical audiences.
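To illustrate the extract-and-report responsibility above, here is a minimal sketch using Python's built-in `sqlite3` module. The `orders` table and its values are hypothetical stand-ins for a real data source; an actual role would query a warehouse such as Snowflake instead.

```python
import sqlite3

# Hypothetical in-memory table standing in for a production data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("North", 120.0), ("North", 80.0), ("South", 250.0)],
)

# A typical reporting query: aggregate revenue per region, highest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()

for region, revenue in rows:
    print(f"{region}: {revenue:.2f}")
```

The same GROUP BY / ORDER BY pattern carries over to warehouse SQL dialects; only the connection layer changes.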
Required Skills and Qualifications:
- Cloud platforms: Familiarity with cloud platforms and orchestration tools for data analysis, such as Snowflake, AWS SageMaker, AWS Lambda, Airflow, Jenkins, and Postman.
- Machine Learning: Proficiency with fundamental machine learning algorithms such as linear regression, logistic regression, decision trees, random forests, and neural networks. Ability to use ML and statistical modeling to solve business problems.
- Feature Selection and Dimensionality Reduction: Familiarity with techniques like feature selection, feature engineering, regularization, and dimensionality reduction (e.g., PCA) to improve model efficiency, reduce overfitting, and enhance interpretability.
- Model Deployment: Experience in deploying machine learning models in production environments, utilizing frameworks like Flask or Django. Understanding of model serving, API development, and cloud deployment to enable real-time predictions.
- Dashboarding: Familiarity with BI and dashboarding software such as Superset, Looker, and Tableau.
- Candidates with experience in the credit card, banking, or healthcare domains and/or strong mathematical and statistical backgrounds will be preferred.
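A minimal sketch, assuming scikit-learn, of the kind of workflow the machine-learning and dimensionality-reduction requirements describe: PCA reduces the feature space before a logistic-regression classifier, evaluated on a held-out split. The Iris dataset is used purely as a stand-in for real business data.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in dataset: 150 samples, 4 features, 3 classes.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Reduce 4 features to 2 principal components, then classify.
model = make_pipeline(PCA(n_components=2), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

Wrapping PCA and the classifier in one pipeline keeps the dimensionality reduction fitted only on training data, which avoids leakage into the test evaluation.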
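The model-deployment requirement mentions Flask; below is a minimal sketch of serving predictions behind a Flask JSON API. The `predict` function here is a hypothetical hard-coded rule standing in for a real trained model (which would typically be loaded with joblib or a similar tool), so the example stays self-contained.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Placeholder for a real trained model loaded at startup.
    score = 0.5 * features[0] + 0.25 * features[1]
    return {"score": score, "label": "high" if score > 1.0 else "low"}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Expects JSON of the form {"features": [x0, x1]}.
    payload = request.get_json()
    return jsonify(predict(payload["features"]))

# To serve locally: app.run(port=5000)
```

In production this endpoint would sit behind a WSGI server and could be containerized for cloud deployment, matching the posting's emphasis on real-time predictions.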