test-replace-fix #53

Merged
merged 1 commit into from Feb 8, 2025

Changes from all commits

38 changes: 19 additions & 19 deletions examples/databricks/all-purpose-cluster/README.md
@@ -20,23 +20,23 @@ Dependencies:
- Grant the CICD agent the account admin role, using the page shown in Figure S5.
- Create a secret for the CICD agent, using the page shown in Figure S6. At the time you create this, you will need to safely store the client secret and client id, as prompted by the web page. These will be used below.

- Now, it is convenient to use environment variables for context. Note that for our example only one aws account is relevant; this is not always the case in practice, so while `DATABRICKS_aws_ACCOUNT_ID` is the same as `aws_ACCOUNT_ID` here, it need not always be the case. Create a file in the path `examples/databricks/all-purpose-cluster/sec/env.sh` (relative to the root of this repository) with contents of the form:
+ Now, it is convenient to use environment variables for context. Note that for our example only one aws account is relevant; this is not always the case in practice, so while `DATABRICKS_AWS_ACCOUNT_ID` is the same as `AWS_ACCOUNT_ID` here, it need not always be the case. Create a file in the path `examples/databricks/all-purpose-cluster/sec/env.sh` (relative to the root of this repository) with contents of the form:

```bash
#!/usr/bin/env bash

- export ASSETS_aws_REGION='us-east-1' # or wherever you want
- export aws_ACCOUNT_ID='<your aws account ID>'
+ export ASSETS_AWS_REGION='us-east-1' # or wherever you want
+ export AWS_ACCOUNT_ID='<your aws account ID>'
export DATABRICKS_ACCOUNT_ID='<your databricks account ID>'
- export DATABRICKS_aws_ACCOUNT_ID='<your databricks aws account ID>'
+ export DATABRICKS_AWS_ACCOUNT_ID='<your databricks aws account ID>'

# These need to be created by clickops under [the account level user management page](https://accounts.cloud.databricks.com/user-management).
export DATABRICKS_CLIENT_ID='<your clickops created CICD agent client id>'
export DATABRICKS_CLIENT_SECRET='<your clickops created CICD agent client secret>'

## These can be skipped if you run on [aws cloud shell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html).
- export aws_SECRET_ACCESS_KEY='<your aws secret per aws cli>'
- export aws_ACCESS_KEY_ID='<your aws access key id per aws cli>'
+ export AWS_SECRET_ACCESS_KEY='<your aws secret per aws cli>'
+ export AWS_ACCESS_KEY_ID='<your aws access key id per aws cli>'

```
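
Once created, the file can be sourced into the current shell before running the commands below; a minimal sketch, assuming the path above (the `echo` check is illustrative only):

```bash
# Load the CICD context into the current shell.
source examples/databricks/all-purpose-cluster/sec/env.sh

# Optional sanity check of non-secret values (avoid echoing secrets or keys).
echo "aws account: ${AWS_ACCOUNT_ID}, region: ${ASSETS_AWS_REGION}"
```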

@@ -89,10 +89,10 @@ Then, do a dry run (good for catching **some** environmental issues):
```bash
stackql-deploy build \
examples/databricks/all-purpose-cluster dev \
- -e aws_REGION=${ASSETS_aws_REGION} \
- -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+ -e AWS_REGION=${ASSETS_AWS_REGION} \
+ -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
-e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
- -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+ -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
--dry-run
```

@@ -104,10 +104,10 @@ Now, let us do it for real:
```bash
stackql-deploy build \
examples/databricks/all-purpose-cluster dev \
- -e aws_REGION=${ASSETS_aws_REGION} \
- -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+ -e AWS_REGION=${ASSETS_AWS_REGION} \
+ -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
-e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
- -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+ -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
--show-queries
```

@@ -127,10 +127,10 @@ We can also use `stackql-deploy` to assess if our infra is shipshape:
```bash
stackql-deploy test \
examples/databricks/all-purpose-cluster dev \
- -e aws_REGION=${ASSETS_aws_REGION} \
- -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+ -e AWS_REGION=${ASSETS_AWS_REGION} \
+ -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
-e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
- -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+ -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
--show-queries
```

@@ -150,17 +150,17 @@ Now, let us tear down our `stackql-deploy` managed infra:
```bash
stackql-deploy teardown \
examples/databricks/all-purpose-cluster dev \
- -e aws_REGION=${ASSETS_aws_REGION} \
- -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+ -e AWS_REGION=${ASSETS_AWS_REGION} \
+ -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
-e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
- -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+ -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
--show-queries
```

This takes its time and is again verbose; it concludes with:

```
- 2025-02-08 13:24:17,941 - stackql-deploy - INFO - ✅ successfully deleted aws_iam_cross_account_role
+ 2025-02-08 13:24:17,941 - stackql-deploy - INFO - ✅ successfully deleted AWS_iam_cross_account_role
2025-02-08 13:24:17,942 - stackql-deploy - INFO - deployment completed in 0:03:21.191788
🚧 teardown complete (dry run: False)
```
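
The four `stackql-deploy` invocations above differ only in the subcommand and trailing flag, so the shared `-e` context flags could be factored into a small wrapper. A hypothetical sketch (the `run.sh` name and the array-based flag grouping are illustrative, not part of the example project):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper: run a stackql-deploy subcommand with the shared context flags.
# Usage: ./run.sh build|test|teardown [--dry-run|--show-queries ...]
set -euo pipefail

source examples/databricks/all-purpose-cluster/sec/env.sh

# The same -e flags are passed to build, test, and teardown above.
common_flags=(
  -e "AWS_REGION=${ASSETS_AWS_REGION}"
  -e "AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID}"
  -e "DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID}"
  -e "DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID}"
)

stackql-deploy "$1" examples/databricks/all-purpose-cluster dev "${common_flags[@]}" "${@:2}"
```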