Ambari 2.7 and HDP 3.0 - Stack HDP 3.0 is not found in Ambari metainfo
Here are my initial values:
cluster_name: 'hdp3'
ambari_version: '2.7.0.0' # must be the 4-part full version number
hdp_version: '3.0.0.0' # must be the 4-part full version number
hdp_build_number: '1634' # the HDP build number from docs.hortonworks.com (if set to 'auto', Ansible will try to get it from the repository)
Here is the error message:
An internal system exception occurred: Stack data, Stack HDP 3.0 is not found in Ambari metainfo
It fails on this task: TASK [ambari-config : Register the VDF with Ambari (Ambari >= 2.6)] ************
I see Ambari 2.6.2.2 is installed. This worked some weeks ago; did I miss something?
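
For context, the failing task registers a version definition file (VDF) with Ambari through its REST API. Below is a minimal, hedged sketch of that kind of call using Ansible's uri module; it is not the playbook's actual task, and the host, credentials and VDF URL are placeholders, not the values ansible-hortonworks uses.

```yaml
# Hedged sketch only: illustrates the kind of request the "Register the VDF
# with Ambari" step makes. Host, credentials and the VDF URL are placeholders.
- name: Register an HDP VDF with Ambari (illustrative sketch)
  uri:
    url: "http://{{ ambari_host | default('localhost') }}:8080/api/v1/version_definitions"
    method: POST
    user: admin
    password: admin
    force_basic_auth: yes
    headers:
      X-Requested-By: ambari
    body_format: json
    body:
      VersionDefinition:
        # Example VDF URL pattern; the real URL depends on hdp_version and hdp_build_number
        version_url: "http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml"
    status_code: [200, 201, 202]
  register: vdf_registration
```

This call can only succeed if the target stack (HDP 3.0) is present in Ambari's metainfo. Since the reporter ends up with Ambari 2.6.2.2 actually installed rather than 2.7, that Ambari would have no HDP 3.0 stack definition, which would explain the error above.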

@markokole Not sure I understand: if you use --extra-vars you'll override any variable, so you have to use the same --extra-vars everywhere. Or better, use the install_cluster script rather than the individual scripts. The --extra-vars option should be used very rarely (for example when testing something or to override a simple variable); you should rely on group_vars/all for everything else and maintain it. Some configs depend on one another, and only group_vars/all will offer the full picture of what is installed and should be running (I'm using the "One Mona Lisa" principle here, as recommended by Ansible).

@lhoss I'll update the versions later today as I'm testing some combinations. I was waiting for the first maintenance release to upgrade more defaults (for example, if the default hdp_version changes then the default blueprint_dynamic variable will also need to change). And yes, please do create issues and share ideas, as I can't be aware of how people are using this 😃
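
To illustrate the point about keeping everything in group_vars/all rather than passing --extra-vars, here is a hedged excerpt of how the related variables might sit together in that file. The service layout under blueprint_dynamic and the blueprint_name value are purely illustrative, not the repository's actual defaults.

```yaml
# Hedged excerpt of group_vars/all, showing why related variables should live
# together in one file: hdp_version and the blueprint_dynamic layout have to
# stay consistent with each other. Values below are illustrative only.
ambari_version: '2.7.0.0'
hdp_version: '3.0.0.0'
hdp_build_number: 'auto'          # let Ansible resolve the build from the repository

blueprint_name: 'my_blueprint'    # illustrative name, not a repo default
blueprint_dynamic:                # must match the services available in hdp_version
  - host_group: "hdp-master"
    clients: ['ZOOKEEPER_CLIENT', 'HDFS_CLIENT', 'YARN_CLIENT']
    services:
      - ZOOKEEPER_SERVER
      - NAMENODE
      - RESOURCEMANAGER
```

Overriding just one of these with --extra-vars (say, hdp_version) while the rest stays in group_vars/all is exactly the kind of partial picture the comment warns about.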
@markokole In fact I used the defaults:
This worked for me for both HDP versions, 3.0.1.0 and 3.0.0.0! Actually, I just saw the latest commit in master, using the latest versions: https://github.com/hortonworks/ansible-hortonworks/commit/9bf56742650e03e7762bbe559d497bbbeba89529, and also still the default build_numbers.
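
For completeness, a hedged sketch of the combination reported as working above, with the build number left on 'auto' so Ansible resolves it from the repository. These are only the values quoted in this thread; the ambari_version is assumed from the issue's initial values and none of this is necessarily the repo's current default.

```yaml
# Hedged sketch of the reported-working combination; not necessarily the
# repository's current defaults.
ambari_version: '2.7.0.0'   # assumed from the issue's initial values
hdp_version: '3.0.1.0'      # the commenter reports 3.0.0.0 also worked
hdp_build_number: 'auto'    # resolved from the repository by Ansible
```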