Getting Started with AWS

Connecting your AWS accounts to Hava.

When you log in to Hava for the first time, you will be given the option to import some demo environments, or to jump straight into connecting your own AWS, Azure or GCP accounts:

The first step in creating accurate AWS infrastructure diagrams with Hava is to connect Hava to your AWS account.

We strongly advise creating a Cross Account Role to allow access to your AWS environment. Hava is built on AWS and this method is considered AWS best practice.

Alternatively, you may create a new IAM user with read-only permissions. Either way, from an infrastructure integrity and security perspective there can be no doubt that Hava cannot change or update anything in your environment: it is limited to reading the data it needs to visualise your AWS environment.

You may also create a Minimum Access Read Only IAM User with customisable permissions if you wish to exclude access to any components of your AWS environment.

How to create a Cross Account Role

From the Hava Environments screen, select "Add Environments":

In a separate browser tab, log in to your AWS Console. Then navigate back to Hava and:

Select the Amazon Tab.

Select "Cross Account Role"

Click on the "Jump to AWS Console and create read only account role" link. This will open your AWS console in the Create Role dialogue with the fields pre-filled:

Ensure the Account ID and External ID match the dialogue window in Hava.

Ensure "Require MFA" remains unchecked
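Behind the scenes, the pre-filled dialogue creates a trust policy on the new role that lets Hava's AWS account assume it, scoped by the External ID. As a rough sketch (not the exact policy Hava generates), it has the following shape, where HAVA_ACCOUNT_ID and YOUR_EXTERNAL_ID stand in for the Account ID and External ID values shown in the Hava dialogue:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::HAVA_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "YOUR_EXTERNAL_ID"
        }
      }
    }
  ]
}
```

The External ID condition is what prevents anyone other than Hava, acting on your behalf, from assuming the role, which is why the two IDs must match the dialogue exactly.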

Click "Next: Permissions" and then "Create Policy":

Select the JSON tab

Paste in the following JSON policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "acm:DescribeCertificate",
        "acm:GetCertificate",
        "acm:ListCertificates",
        "apigateway:GET",
        "apigateway:HEAD",
        "apigateway:OPTIONS",
        "appstream:Get*",
        "autoscaling:Describe*",
        "cloudformation:DescribeStackEvents",
        "cloudformation:DescribeStackResource",
        "cloudformation:DescribeStackResources",
        "cloudformation:DescribeStacks",
        "cloudformation:GetTemplate",
        "cloudformation:List*",
        "cloudfront:Get*",
        "cloudfront:List*",
        "cloudsearch:Describe*",
        "cloudsearch:List*",
        "cloudtrail:DescribeTrails",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "cloudwatch:List*",
        "codecommit:BatchGetRepositories",
        "codecommit:Get*",
        "codecommit:GitPull",
        "codecommit:List*",
        "codedeploy:Batch*",
        "codedeploy:Get*",
        "codedeploy:List*",
        "config:Deliver*",
        "config:Describe*",
        "config:Get*",
        "datapipeline:DescribeObjects",
        "datapipeline:DescribePipelines",
        "datapipeline:EvaluateExpression",
        "datapipeline:GetPipelineDefinition",
        "datapipeline:ListPipelines",
        "datapipeline:QueryObjects",
        "datapipeline:ValidatePipelineDefinition",
        "directconnect:Describe*",
        "ds:Check*",
        "ds:Describe*",
        "ds:Get*",
        "ds:List*",
        "ds:Verify*",
        "dynamodb:DescribeTable",
        "dynamodb:ListTables",
        "ec2:Describe*",
        "ec2:GetConsoleOutput",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetManifest",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage",
        "ecs:Describe*",
        "ecs:List*",
        "elasticache:Describe*",
        "elasticache:List*",
        "elasticbeanstalk:Check*",
        "elasticbeanstalk:Describe*",
        "elasticbeanstalk:List*",
        "elasticbeanstalk:RequestEnvironmentInfo",
        "elasticbeanstalk:RetrieveEnvironmentInfo",
        "elasticfilesystem:DescribeMountTargets",
        "elasticfilesystem:DescribeTags",
        "elasticfilesystem:DescribeFileSystems",
        "elasticfilesystem:DescribeMountTargetSecurityGroups",
        "elasticloadbalancing:Describe*",
        "elasticmapreduce:Describe*",
        "elasticmapreduce:List*",
        "elastictranscoder:List*",
        "elastictranscoder:Read*",
        "es:DescribeElasticsearchDomain",
        "es:DescribeElasticsearchDomains",
        "es:DescribeElasticsearchDomainConfig",
        "es:ListDomainNames",
        "es:ListTags",
        "es:ESHttpGet",
        "es:ESHttpHead",
        "events:DescribeRule",
        "events:ListRuleNamesByTarget",
        "events:ListRules",
        "events:ListTargetsByRule",
        "events:TestEventPattern",
        "firehose:Describe*",
        "firehose:List*",
        "glacier:ListVaults",
        "glacier:DescribeVault",
        "glacier:GetDataRetrievalPolicy",
        "glacier:GetVaultAccessPolicy",
        "glacier:GetVaultLock",
        "glacier:GetVaultNotifications",
        "glacier:ListJobs",
        "glacier:ListMultipartUploads",
        "glacier:ListParts",
        "glacier:ListTagsForVault",
        "glacier:DescribeJob",
        "glacier:GetJobOutput",
        "iam:GenerateCredentialReport",
        "iam:Get*",
        "iam:List*",
        "inspector:Describe*",
        "inspector:Get*",
        "inspector:List*",
        "inspector:LocalizeText",
        "inspector:PreviewAgentsForResourceGroup",
        "iot:Describe*",
        "iot:Get*",
        "iot:List*",
        "kinesis:Describe*",
        "kinesis:Get*",
        "kinesis:List*",
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "lambda:List*",
        "lambda:Get*",
        "logs:Describe*",
        "logs:Get*",
        "logs:TestMetricFilter",
        "machinelearning:Describe*",
        "machinelearning:Get*",
        "mobilehub:GetProject",
        "mobilehub:ListAvailableFeatures",
        "mobilehub:ListAvailableRegions",
        "mobilehub:ListProjects",
        "mobilehub:ValidateProject",
        "mobilehub:VerifyServiceRole",
        "opsworks:Describe*",
        "opsworks:Get*",
        "rds:Describe*",
        "rds:ListTagsForResource",
        "redshift:Describe*",
        "redshift:ViewQueriesInConsole",
        "route53:Get*",
        "route53:List*",
        "route53domains:CheckDomainAvailability",
        "route53domains:GetDomainDetail",
        "route53domains:GetOperationDetail",
        "route53domains:ListDomains",
        "route53domains:ListOperations",
        "route53domains:ListTagsForDomain",
        "s3:GetAccelerateConfiguration",
        "s3:GetAnalyticsConfiguration",
        "s3:GetBucket*",
        "s3:GetInventoryConfiguration",
        "s3:GetIpConfiguration",
        "s3:GetLifecycleConfiguration",
        "s3:GetMetricsConfiguration",
        "s3:GetReplicationConfiguration",
        "s3:List*",
        "sdb:GetAttributes",
        "sdb:List*",
        "sdb:Select*",
        "ses:Get*",
        "ses:List*",
        "sns:Get*",
        "sns:List*",
        "sqs:GetQueueAttributes",
        "sqs:ListQueues",
        "sqs:ReceiveMessage",
        "storagegateway:Describe*",
        "storagegateway:List*",
        "swf:Count*",
        "swf:Describe*",
        "swf:Get*",
        "swf:List*",
        "tag:Get*",
        "trustedadvisor:Describe*",
        "waf:Get*",
        "waf:List*",
        "waf-regional:Get*",
        "waf-regional:List*",
        "workspaces:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Note: the list of resources Hava requests access to ensures the most detailed diagrams and change logging possible. This is a powerful tool for quickly identifying environment changes that may have caused unforeseen issues in your production environment.

You can of course remove any access you are not comfortable with, bearing in mind that this may detract from the detailed analysis of your AWS environment, both now and when new features are released.
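If you do want to trim the policy before pasting it in, it can be easier to filter the action list programmatically than by hand. The following is a minimal sketch using only the Python standard library; the policy string is truncated to a handful of actions for illustration, and the service prefixes to exclude are entirely your own choice:

```python
import json

# A truncated stand-in for the full Hava policy shown above.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "acm:DescribeCertificate",
        "iot:Describe*",
        "iot:Get*",
        "iot:List*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
""")

# Service prefixes you have decided to exclude, e.g. IoT.
excluded = ("iot:",)

for statement in policy["Statement"]:
    statement["Action"] = [
        action for action in statement["Action"]
        if not action.startswith(excluded)
    ]

# The trimmed policy, ready to paste into the JSON tab.
print(json.dumps(policy, indent=2))
```

Running this against the full policy removes every action for the excluded services while leaving the rest of the statement untouched.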

Then click "Review Policy" and name the new policy.

Click "Create Policy" and the new policy will be created.

This process took place in a separate browser tab, so return to the tab where you were creating the new cross account role:

Press the refresh button and filter for the name you gave the new policy:

Select "Next: Tags" - you can skip this step.

Select "Next: Review":

Click "Create Role" then select the new role from the list displayed.

Copy the Role ARN

Paste the Role ARN into the Hava dialogue box, add an optional name and click "Import"

Hava will connect to your environment, retrieve your resources and the relationships between them, and build a complete visualisation of your environment.

From this point on, Hava will sync with your AWS environment every hour and keep track of any structural changes from the VPC level down.

How to create a Read Only IAM User

Using a cross account role is AWS best practice and the preferred method to enable Hava to build your environment diagrams and log changes. If you prefer to set up access via a key pair, then follow these instructions.

Log in to your AWS console and open the Services menu.

Select IAM from the Security, Identity & Compliance options:

Select Users:

Click "Add User":

Enter a memorable User Name and set the access type to "Programmatic Access".

Click "Next: Permissions" to move to the set permissions dialogue.

Select "Attach existing policies directly"

Scroll through the policies, then locate and select "ReadOnlyAccess":

Click Next to advance to the "Add tags" dialogue. Skip this step.

Click "Next: Review" to advance to the review screen:

Click "Create User":

You will see a screen confirming successful creation of the new user, along with the Access Key ID and Secret Access Key credentials. You can write these down, but to ensure accuracy we advise downloading the credentials.csv file and copying and pasting the user credentials from there.
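Copy/paste mistakes at this step are a common cause of failed imports. As a rough sanity check (an illustrative sketch, not an official validation), long-term IAM access key IDs are typically 20 uppercase alphanumeric characters starting with "AKIA", and secret keys are 40 characters. A small helper like the hypothetical one below can catch a truncated or mangled paste before you hand the credentials to Hava; the example values are AWS's own documented sample credentials:

```python
import re

def looks_like_aws_credentials(access_key_id: str, secret_access_key: str) -> bool:
    """Rough format check before pasting credentials into Hava.

    Long-term IAM access key IDs usually start with "AKIA" followed by
    16 uppercase alphanumeric characters; secret keys are 40 characters
    drawn from a base64-style alphabet. This only catches copy/paste
    errors; it does not verify the credentials with AWS.
    """
    key_ok = re.fullmatch(r"AKIA[0-9A-Z]{16}", access_key_id) is not None
    secret_ok = re.fullmatch(r"[A-Za-z0-9/+=]{40}", secret_access_key) is not None
    return key_ok and secret_ok

# AWS's documented example credentials pass the check.
print(looks_like_aws_credentials(
    "AKIAIOSFODNN7EXAMPLE",
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))
```

Note that temporary credentials (which start with "ASIA") will not work here anyway: Hava needs the long-term keys generated for the new user.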

You now have the necessary user and credentials to connect Hava to your AWS environment.

Open the Hava Environments workspace and select Add Environments:

Enter the Access Key and Secret Key from the previous step and click "Import":

Hava will now import your environment components, construct the diagrams and start logging changes as they happen.