Pacotech is hiring Cloud Architect Engineers (Big Data) to work in Ho Chi Minh City & Hanoi

Cloud Architect recruitment

Pacotech is hiring Cloud Architect Engineers (Big Data) to work in Ho Chi Minh City and Hanoi. To meet staffing needs for outsourced projects serving international clients, Pacotech is recruiting for the following position:

Job Title: Cloud Architect (Big Data)

Location: Ho Chi Minh City & Hanoi

Job Description

  • Deliver proof-of-concept projects and topical workshops, and lead implementation projects.
  • Focus on key customer solutions such as web applications, enterprise applications, data warehouse migration, big data, archiving, and disaster recovery.
  • Cover all aspects of data and information (both structured and unstructured) and support the development of enterprise systems from business requirements through to the logical architecture.
  • Span the full information management life cycle, from acquisition, cleansing, data modeling, transformation, and storage to presentation, distribution, security, privacy, and archiving.
  • Guide project teams and data-centric organizations on how best to use enterprise architecture capabilities to deliver higher value to the business at a faster pace.
  • Help Project Managers identify key data and information risks, mitigation plans, effort estimates, and plans.
  • Use domain expertise to influence future capabilities in architecture frameworks for the information architecture domain.
  • Collaborate with Lead Architects across the other architectural domains (namely Business, Application, Technology, and Security) to ensure alignment of strategies.
  • Develop standards, domain principles, and best practices for creating and maintaining architecture artifacts (including inventories and models), and articulate the value of those artifacts.
  • Host peer reviews to assess status and compliance with architecture standards.
  • Ensure the sufficiency of architecture requirements for top projects.
  • Be accountable for developing conceptual, logical, and physical data models and for the use of RDBMSs, operational data stores (ODS), data marts, and data lakes on the target cloud platforms, i.e. Azure/GCP/AWS PaaS (SQL/NoSQL); a minimal sketch of this kind of data-lake work follows this list.
  • Be accountable for and govern the expansion of the existing data architecture and the optimization of data query performance through appropriate solutions; the role requires the ability to work both independently and cross-functionally.
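
As context for the data-lake responsibilities above, the following is a minimal PySpark sketch (one plausible toolchain, not a prescribed one) of curating a raw landing zone into a partitioned, query-ready layer. The paths, column names, and bronze/silver zone naming are illustrative assumptions, not Pacotech specifics.

```python
# Illustrative sketch only: curate raw landing-zone data into a partitioned
# data-lake layer. Paths, columns, and zone names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-curation-sketch").getOrCreate()

# Read raw JSON events from the landing ("bronze") zone. In practice this
# path would point at S3, ADLS, or GCS rather than local storage.
raw = spark.read.json("/lake/bronze/events/")

# Basic cleansing: drop rows missing keys, normalize the timestamp type,
# derive a partition column, and de-duplicate on the business key.
curated = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Publish the curated ("silver") layer as date-partitioned Parquet,
# ready for downstream data marts and BI tools.
curated.write.mode("overwrite").partitionBy("event_date").parquet("/lake/silver/events/")
```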

Requirements

  • Strong analytical and problem-solving skills, with the ability to turn ambiguous and incomplete information into insights and actionable items that Business and IT leaders understand. The candidate must balance operating the business today with understanding and influencing the business and data technologies of the future, and needs the curiosity to engage all business units and functions of the company while partnering with external organizations on the best approaches and solution delivery.
  • Overall 10+ years of experience in the IT industry
  • University graduate (any degree)
  • More than 5 years of demonstrated ability with normalized and dimensional modeling techniques, star and snowflake schemas, slowly changing dimensions, dimensional hierarchies, and data classification, ideally at enterprise scale as well as at the organizational level (a Type 2 slowly-changing-dimension update is illustrated in the sketch after this list).
  • 5+ years of experience with high-scale/distributed RDBMSs.
  • Expertise in Data Quality, Data Profiling, Data Governance, Data Security, Metadata Management, MDM, Data Archival, and Data Migration strategies using appropriate tools.
  • Ability to define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
  • Hands-on data modeling across canonical, semantic, logical, and physical data models, including design, schema synchronization, and performance tuning.
  • Demonstrated ability to succeed in a fast-paced, changing environment with interruptions and multiple tasks/projects occurring simultaneously
  • Ability to work independently and have skills in planning, strategy, estimation, scheduling, and project management.
  • Strong problem-solving, influencing, communication, and presentation skills; a self-starter
  • Ability to define reusable components, frameworks, common schemas, standards, and tools
  • Mentor and guide Technical Leads & Software Engineers (both internal and external)
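
To make the slowly-changing-dimension requirement concrete, the following is a minimal PySpark sketch of a Type 2 dimension update: rows whose tracked attribute changed are closed out with an end date and re-inserted as new current versions. The table layout (customer_id, a tracked city attribute, and is_current/valid_from/valid_to columns) is hypothetical, not a prescribed schema.

```python
# Illustrative Type 2 slowly-changing-dimension (SCD2) update in PySpark.
# Assumes the dimension holds: customer_id, city, is_current, valid_from, valid_to,
# and the incoming snapshot holds exactly customer_id and city.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

dim = spark.read.parquet("/warehouse/dim_customer/")        # existing dimension
incoming = spark.read.parquet("/staging/customers_today/")  # today's source snapshot

current = dim.filter(F.col("is_current"))

# Incoming rows whose tracked attribute differs from the current dimension version.
changed = (
    incoming.alias("i")
    .join(current.alias("c"), F.col("i.customer_id") == F.col("c.customer_id"))
    .filter(F.col("i.city") != F.col("c.city"))
    .select("i.*")
)
changed_ids = [r.customer_id for r in changed.select("customer_id").distinct().collect()]

# Close out the superseded current versions...
closed = (
    current.filter(F.col("customer_id").isin(changed_ids))
    .withColumn("is_current", F.lit(False))
    .withColumn("valid_to", F.current_date())
)

# ...and open new current versions for the changed rows.
opened = (
    changed
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
)

# Everything else in the dimension is carried over unchanged.
untouched = dim.filter(~(F.col("customer_id").isin(changed_ids) & F.col("is_current")))

new_dim = untouched.unionByName(closed).unionByName(opened)
new_dim.write.mode("overwrite").parquet("/warehouse/dim_customer_v2/")
```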

Hands-on experience in:

  • Data modeling tools (e.g. Erwin Data Modeler, ArchiMate, ER/Studio, DbSchema, etc.); ETL (Extract-Transform-Load) tools (e.g. Informatica, Google Dataflow, Azure Data Factory, Talend, etc.); BI tools and reporting software (e.g. Tableau, Power BI, MicroStrategy, etc.)
  • Either AWS technologies: EC2, ECS, Lambda, S3, EBS, EFS, Redshift, RDS, DynamoDB, VPC, CloudWatch, CloudFormation, CloudTrail, OpsWorks, IAM, Directory Service, Ansible;
  • or MS Azure: Security Center, Azure Active Directory (Core, Developer, B2C, Services), Key Vault, and an understanding of securing PaaS solutions such as SQL Data Warehouse, ADF, SQL DB, Azure App Service, etc.;
  • or GCP: the Google app stack and the Google Big Data stacks.

  • Hands-on experience (preferred) or familiarity with any of the CI/CD tools: GitHub, Ansible, Jenkins, Spinnaker, and Istio.
  • Experience with, or at least awareness of, microservice-based architecture is desirable.
  • Experience in data domain modeling and data design, using tools such as ArchiMate or Erwin.

Knowledge of the following:

  • Cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce) and considerations for scalable, distributed systems; NoSQL platforms (e.g. key-value stores, graph databases); data modeling techniques for NoSQL data and cloud data platforms (e.g. AWS, Azure, GCP); and high-scale or distributed cloud-native data platforms
  • Experience in solution architecting, design, and execution of data lakes in CSPs (Azure/GCP preferred, or AWS)

Preferred Qualifications:

  • Solutions Architect (Associate) level certification in AWS, MS Azure, or GCP
  • Hands-on experience with the following: the Hadoop stack (e.g. MapReduce, Sqoop, Pig, Hive, HBase, Flume); analytical tools, languages, or libraries (e.g. TensorFlow, Spark, PySpark, KNIME, etc.); and related/complementary open-source software platforms and languages (e.g. Java, Python, Spark, Scala, Kubernetes, Docker, etc.). See the profiling sketch below.
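
As a small illustration of the analytical tooling listed above, the following is a minimal PySpark sketch of lightweight data profiling: per-column null counts and approximate distinct counts in one aggregate pass, the kind of check that feeds the data-quality work named in the requirements. The input path is a placeholder.

```python
# Illustrative data-profiling sketch in PySpark: per-column null counts and
# approximate distinct counts in a single aggregate pass. Input path is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()
df = spark.read.parquet("/lake/silver/events/")

profile = df.agg(
    *([F.count(F.when(F.col(c).isNull(), 1)).alias(f"{c}__nulls") for c in df.columns]
      + [F.approx_count_distinct(c).alias(f"{c}__distinct") for c in df.columns])
)
profile.show(truncate=False)
```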

Benefits

  • Work in a professional, dynamic, and friendly environment
  • Full participation in social insurance and health insurance programs
  • Official holidays in accordance with Vietnamese regulations, plus many appealing annual corporate events

Contact

  • Mr. Hai, Pacotech Co., Ltd., phone: +84.24.668.26.368, email: info@pacotech.vn, website: https://pacotech.vn
