Commit 3b6a9a4

test-replace-fix
Summary: - Fix an over-eager text replacement in the walkthrough.
1 parent 2954c63 commit 3b6a9a4

File tree

1 file changed: +19 −19 lines changed
  • examples/databricks/all-purpose-cluster


examples/databricks/all-purpose-cluster/README.md

Lines changed: 19 additions & 19 deletions
@@ -20,23 +20,23 @@ Dependencies:
 - Grant the CICD agent account admin role, using the page shown in Figure S5.
 - Create a secret for the CICD agent, using the page shown in Figure S6. At the time you create this, you will need to safely store the client secret and client id, as prompted by the web page. These will be used below.
 
-Now, it is convenient to use environment variables for context. Note that for our example there is only one aws account apropos; however, this is not always the case for an active professional, so while `DATABRICKS_aws_ACCOUNT_ID` is the same as `aws_ACCOUNT_ID` here, it need not always be the case. Create a file in the path `examples/databricks/all-purpose-cluster/sec/env.sh` (relative to the root of this repository) with contents of the form:
+Now, it is convenient to use environment variables for context. Note that for our example there is only one aws account apropos; however, this is not always the case for an active professional, so while `DATABRICKS_AWS_ACCOUNT_ID` is the same as `AWS_ACCOUNT_ID` here, it need not always be the case. Create a file in the path `examples/databricks/all-purpose-cluster/sec/env.sh` (relative to the root of this repository) with contents of the form:
 
 ```bash
 #!/usr/bin/env bash
 
-export ASSETS_aws_REGION='us-east-1' # or wherever you want
-export aws_ACCOUNT_ID='<your aws account ID>'
+export ASSETS_AWS_REGION='us-east-1' # or wherever you want
+export AWS_ACCOUNT_ID='<your aws account ID>'
 export DATABRICKS_ACCOUNT_ID='<your databricks account ID>'
-export DATABRICKS_aws_ACCOUNT_ID='<your databricks aws account ID>'
+export DATABRICKS_AWS_ACCOUNT_ID='<your databricks aws account ID>'
 
 # These need to be created by clickops under [the account level user management page](https://accounts.cloud.databricks.com/user-management).
 export DATABRICKS_CLIENT_ID='<your clickops created CICD agent client id>'
 export DATABRICKS_CLIENT_SECRET='<your clickops created CICD agent client secret>'
 
 ## These can be skipped if you run on [aws cloud shell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html).
-export aws_SECRET_ACCESS_KEY='<your aws secret per aws cli>'
-export aws_ACCESS_KEY_ID='<your aws access key id per aws cli>'
+export AWS_SECRET_ACCESS_KEY='<your aws secret per aws cli>'
+export AWS_ACCESS_KEY_ID='<your aws access key id per aws cli>'
 
 ```
 
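Once a file like the `env.sh` above is in place, the walkthrough's later commands rely on those variables being set in the current shell, so the file must be sourced rather than executed. A minimal self-contained sketch (the file contents below are placeholders standing in for the real `env.sh`):

```shell
# Write a stand-in env.sh to a temp dir so this sketch is self-contained,
# then source it with `.` so the exports land in the current shell.
tmpdir="$(mktemp -d)"
cat > "${tmpdir}/env.sh" <<'EOF'
#!/usr/bin/env bash
export ASSETS_AWS_REGION='us-east-1'
export AWS_ACCOUNT_ID='123456789012'
EOF
. "${tmpdir}/env.sh"
echo "region=${ASSETS_AWS_REGION} account=${AWS_ACCOUNT_ID}"
```

Running `bash env.sh` instead would set the variables only in a child process; `.` (or `source`) is what makes them visible to subsequent commands in the same shell.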

@@ -89,10 +89,10 @@ Then, do a dry run (good for catching **some** environmental issues):
 ```bash
 stackql-deploy build \
   examples/databricks/all-purpose-cluster dev \
-  -e aws_REGION=${ASSETS_aws_REGION} \
-  -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+  -e AWS_REGION=${ASSETS_AWS_REGION} \
+  -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
   -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
-  -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+  -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
   --dry-run
 ```
 

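On the flag shape used throughout these commands: each `-e NAME=value` pair is expanded by the shell before the deploy tool runs, so the `${...}` variables must already be set in the calling shell (e.g. by sourcing `env.sh`). A stand-in `echo` sketch, with a placeholder value, shows the literal argument the tool would receive:

```shell
# Placeholder value standing in for the sourced env.sh.
export ASSETS_AWS_REGION='us-east-1'
# Stand-in for the deploy command: the shell expands ${ASSETS_AWS_REGION}
# before the command sees its arguments.
arg="AWS_REGION=${ASSETS_AWS_REGION}"
echo "-e ${arg}"
```

If the variable were unset, the argument would silently collapse to `AWS_REGION=`, which is why the env file must be sourced first.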
@@ -104,10 +104,10 @@ Now, let us do it for real:
 ```bash
 stackql-deploy build \
   examples/databricks/all-purpose-cluster dev \
-  -e aws_REGION=${ASSETS_aws_REGION} \
-  -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+  -e AWS_REGION=${ASSETS_AWS_REGION} \
+  -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
   -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
-  -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+  -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
   --show-queries
 ```
 

@@ -127,10 +127,10 @@ We can also use `stackql-deploy` to assess if our infra is shipshape:
 ```bash
 stackql-deploy test \
   examples/databricks/all-purpose-cluster dev \
-  -e aws_REGION=${ASSETS_aws_REGION} \
-  -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+  -e AWS_REGION=${ASSETS_AWS_REGION} \
+  -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
   -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
-  -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+  -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
   --show-queries
 ```
 

@@ -150,17 +150,17 @@ Now, let us teardown our `stackql-deploy` managed infra:
 ```bash
 stackql-deploy teardown \
   examples/databricks/all-purpose-cluster dev \
-  -e aws_REGION=${ASSETS_aws_REGION} \
-  -e aws_ACCOUNT_ID=${aws_ACCOUNT_ID} \
+  -e AWS_REGION=${ASSETS_AWS_REGION} \
+  -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
   -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
-  -e DATABRICKS_aws_ACCOUNT_ID=${DATABRICKS_aws_ACCOUNT_ID} \
+  -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
   --show-queries
 ```
 
 Takes its time, again verbose, concludes in:
 
 ```
-2025-02-08 13:24:17,941 - stackql-deploy - INFO - ✅ successfully deleted aws_iam_cross_account_role
+2025-02-08 13:24:17,941 - stackql-deploy - INFO - ✅ successfully deleted AWS_iam_cross_account_role
 2025-02-08 13:24:17,942 - stackql-deploy - INFO - deployment completed in 0:03:21.191788
 🚧 teardown complete (dry run: False)
 ```
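Given that this commit exists to repair an over-eager text replacement, a case-sensitive grep is one quick way to confirm no lowercase `aws_` variable names survive in the walkthrough. A self-contained sketch (the sample text here stands in for the real README):

```shell
# Write a sample of the corrected text, then search for lowercase aws_
# immediately followed by an uppercase letter (the broken variable shape,
# e.g. aws_REGION or DATABRICKS_aws_ACCOUNT_ID).
sample="$(mktemp)"
cat > "${sample}" <<'EOF'
export ASSETS_AWS_REGION='us-east-1'
export DATABRICKS_AWS_ACCOUNT_ID='<your databricks aws account ID>'
EOF
if grep -q 'aws_[A-Z]' "${sample}"; then
  echo "lowercase aws_ names remain"
else
  echo "clean"
fi
```

The pattern deliberately ignores legitimate lowercase prose such as "aws account" and resource names like `aws_iam_cross_account_role`, since neither has an uppercase letter directly after `aws_`.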
