
Commit ae00df6

DOC: Correct typos in the narrative of Examples (#933)
1 parent 12aa0ad commit ae00df6

10 files changed (+55, -54 lines)

docs/notebooks/beta_regression.ipynb

Lines changed: 6 additions & 6 deletions
@@ -253,13 +253,13 @@
 "source": [
 "The model fit, but clearly these parameters are not the ones that we used above. For Beta regression, we use a linear model for the mean, so we use the $\\mu$ and $\\sigma$ formulation. To link the two, we use\n",
 "\n",
-"$\\alpha = \\mu \\kappa$\n",
+"$$\\alpha = \\mu \\kappa$$\n",
 "\n",
-"$\\beta = (1-\\mu)\\kappa$\n",
+"$$\\beta = (1-\\mu)\\kappa$$\n",
 "\n",
 "and $\\kappa$ is a function of the mean and variance,\n",
 "\n",
-"$\\kappa = \\frac{\\mu(1-\\mu)}{\\sigma^2} - 1$\n",
+"$$\\kappa = \\frac{\\mu(1-\\mu)}{\\sigma^2} - 1$$\n",
 "\n",
 "Rather than $\\sigma$, you'll note Bambi returns $\\kappa$. We'll define a function to retrieve our original parameters."
 ]
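The conversion this hunk describes is simple enough to sketch directly. A minimal Python version, with illustrative function names rather than the ones the notebook defines:

```python
import numpy as np

def mu_kappa_to_alpha_beta(mu, kappa):
    # alpha = mu * kappa, beta = (1 - mu) * kappa
    return mu * kappa, (1 - mu) * kappa

def mu_kappa_to_sigma(mu, kappa):
    # Invert kappa = mu * (1 - mu) / sigma**2 - 1 to recover sigma
    return np.sqrt(mu * (1 - mu) / (kappa + 1))

alpha, beta = mu_kappa_to_alpha_beta(0.7, 10.0)  # e.g. mu = 0.7, kappa = 10
sigma = mu_kappa_to_sigma(0.7, 10.0)
```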
@@ -1620,9 +1620,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -1634,7 +1634,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/circular_regression.ipynb

Lines changed: 5 additions & 5 deletions
@@ -33,9 +33,9 @@
 "source": [
 "Directional statistics, also known as circular statistics or spherical statistics, refers to a branch of statistics dealing with data which domain is the unit circle, as opposed to \"linear\" data which support is the real line. Circular data is convenient when dealing with directions or rotations. Some examples include temporal periods like hours or days, compass directions, dihedral angles in biomolecules, etc.\n",
 "\n",
-"The fact that a Sunday can be both the day before or after a Monday, or that 0 is a \"better average\" for 2 and 358 degrees than 180 are illustrations that circular data and circular statistical methods are better equipped to deal with this kind of problem than the more familiar methods [1](https://en.wikipedia.org/wiki/Directional_statistics).\n",
+"The fact that a Sunday can be both the day before or after a Monday, or that 0 is a \"better average\" for 2 and 358 degrees than 180 are illustrations that circular data and circular statistical methods are better equipped to deal with this kind of problem than the more familiar methods [\[1\]](https://en.wikipedia.org/wiki/Directional_statistics).\n",
 "\n",
-"There are a few circular distributions, one of them is the [VonMises distribution](https://en.wikipedia.org/wiki/Von_Mises_distribution), that we can think as the cousin of the Gaussian that lives in circular space. The domain of this distribution is any interval of length $2\\pi$. We are going to adopt the convention that the interval goes from $-\\pi$ to $\\pi$, so for example 0 radians is the same as $2\\pi$. The VonMises is defined using two parameters, the mean $\\mu$ (the circular mean) and the concentration $\\kappa$, with $\\frac{1}{\\kappa}$ being analogue of the variance. Let see a few example of the VonMises family:"
+"There are a few circular distributions, one of them is the [VonMises distribution](https://en.wikipedia.org/wiki/Von_Mises_distribution), that we can think as the cousin of the Gaussian that lives in circular space. The domain of this distribution is any interval of length $2\\pi$. We are going to adopt the convention that the interval goes from $-\\pi$ to $\\pi$, so for example 0 radians is the same as $2\\pi$. The VonMises is defined using two parameters, the mean $\\mu$ (the circular mean) and the concentration $\\kappa$, with $\\frac{1}{\\kappa}$ being analogue of the variance. Let see a few examples of the VonMises family:"
 ]
 },
 {
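The cell this narrative introduces plots members of the von Mises family; a minimal sketch of such a plot with SciPy, using illustrative κ values:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.linspace(-np.pi, np.pi, 500)
for kappa in (0.5, 2.0, 8.0):
    # Larger kappa concentrates mass around the circular mean (0 here);
    # 1/kappa plays a role analogous to the variance.
    plt.plot(x, stats.vonmises.pdf(x, kappa), label=f"$\\kappa = {kappa}$")
plt.xlabel("radians")
plt.legend()
plt.show()
```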
@@ -536,9 +536,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -550,7 +550,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/distributional_models.ipynb

Lines changed: 7 additions & 7 deletions
@@ -37,7 +37,7 @@
 "\n",
 "Instead, with distributional models we can specify predictor terms for all parameters of the response distribution. This can be useful, for example, to model heteroskedasticity, i.e. unequal variance. In this notebook we are going to do exactly that. \n",
 "\n",
-"To better understand distributional models, let's begin fitting a non-distributional models. We are going to model the following syntetic dataset. And we are going to use a Gamma response with a `log` link function."
+"To better understand distributional models, let's begin fitting a non-distributional models. We are going to model the following synthetic dataset. And we are going to use a Gamma response with a `log` link function."
 ]
 },
 {
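For orientation, a Bambi model of the kind this paragraph describes might be set up as below. This is a sketch, not the notebook's code: `df`, `x`, `y` are placeholder names, and the auxiliary-parameter name (`alpha` for the Gamma family) should be checked against the Bambi version in use.

```python
import bambi as bmb

# Non-distributional baseline: linear predictor on the mean only
model_constant = bmb.Model("y ~ x", df, family="gamma", link="log")

# Distributional variant: the auxiliary (shape) parameter gets its own predictor
formula = bmb.Formula("y ~ x", "alpha ~ x")
model_varying = bmb.Model(formula, df, family="gamma", link="log")
```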
@@ -615,7 +615,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This is nice statistical art and a good insight into what the model is actully doing. But at this point you may be wondering how results looks like and more important how different they are from `model_constant`. Let's plot the mean and predictions as we did before, but for both models."
+"This is nice statistical art and a good insight into what the model is actually doing. But at this point you may be wondering how the results look like, and more importantly, how different they are from `model_constant`. Let's plot the mean and predictions as we did before, but for both models."
 ]
 },
 {
@@ -672,9 +672,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can see that mean is virtually the same for both model but the predictions are not, in particular for larger values of the predictiors. \n",
+"We can see that mean is virtually the same for both models but the predictions are not, in particular for larger values of the predictiors. \n",
 "\n",
-"We can also check that the models actually looks different under the LOO metric, with a slight preference for the varying model."
+"We can also check that the models actually look different under the LOO metric, with a slight preference for the varying model."
 ]
 },
 {
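The LOO check mentioned in the last hunk is typically a one-liner in ArviZ; a sketch assuming the two fits are stored as `idata_constant` and `idata_varying` (invented names):

```python
import arviz as az

# Rank models by expected log pointwise predictive density (PSIS-LOO)
cmp = az.compare({"constant": idata_constant, "varying": idata_varying})
print(cmp)
```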
@@ -951,9 +951,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -965,7 +965,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/hsgp_1d.ipynb

Lines changed: 5 additions & 5 deletions
@@ -599,7 +599,7 @@
 "source": [
 "## How does `hsgp()` work?\n",
 "\n",
-"`hsgp()` is a transformation that is available in the namespace where the model formula is evaluated. In plain english, `hsgp()` is like a function you can use in your model formulas. You don't need to worry about the details, Bambi knows how to handle them.But if still you want to see the actual code, you can have a look at the implementation of the `HSGP` class in [bambi/transformations.py](https://github.com/bambinos/bambi/blob/main/bambi/transformations.py).\n",
+"`hsgp()` is a transformation that is available in the namespace where the model formula is evaluated. In plain English, `hsgp()` is like a function you can use in your model formulas. You don't need to worry about the details, Bambi knows how to handle them. But if still you want to see the actual code, you can have a look at the implementation of the `HSGP` class in [bambi/transformations.py](https://github.com/bambinos/bambi/blob/main/bambi/transformations.py).\n",
 "\n",
 "What users do need to care about is the arguments the `hsgp()` transformation support. There are a bunch of arguments that can be passed after the variable number of non-keyword arguments representing the variables of the HSGP contribution. Below is a brief overview of these arguments and their respective descriptions.\n",
 "\n",
@@ -610,7 +610,7 @@
 "* `cov`: This argument specifies the name of the covariance function to be used. The default value is `\"ExpQuad\"`.\n",
 "* `share_cov`: Determines whether the same covariance function is shared across all groups. This argument is relevant only when by is not `None` and the default value is `True`. \n",
 "* `scale`: When set to `True`, the predictors are be rescaled such that the largest Euclidean distance between two points is 1. This adjustment often improves the sampling speed and convergence. \n",
-"* `iso`: Determines whether to use an isotropic or non-isotropic Gaussian Process. With an isotropic GP, the same level of smoothing is applied to all predictors, while a anisotropic GP allows different levels of smoothing for individual predictors. Note that this argument is ignored if only one predictor is provided. The default value is `True`.\n",
+"* `iso`: Determines whether to use an isotropic or anisotropic (non-isotropic) Gaussian Process. With an isotropic GP, the same level of smoothing is applied to all predictors, while an anisotropic GP allows different levels of smoothing for individual predictors. Note that this argument is ignored if only one predictor is provided. The default value is `True`.\n",
 "* `drop_first`: Whether to exclude the first basis vector or not. The default value is `False`.\n",
 "* `centered`: Whether to use the centered or the non-centered parametrization. Defaults to `False`.\n",
 "\n",
@@ -1546,9 +1546,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -1560,7 +1560,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/multi-level_regression.ipynb

Lines changed: 6 additions & 5 deletions
@@ -390,7 +390,8 @@
 "\n",
 "`az.plot_trace(results, var_names=[\"~1|Pig_sigma\", \"~Time|Pig_sigma\"], compact=True);`\n",
 "\n",
-"which uses an alternative notation to pass `var_names` based on the negation symbol in Python, `~`. There we are telling ArviZ to plot all the variables in the InferenceData object `results`, except from `1|Pig_sigma` and `Time|Pig_sigma`. \n",
+"which uses an alternative notation to pass `var_names` based on the negation symbol in Python, `~`. There we are telling ArviZ to plot all the variables in the InferenceData object `results`, except from `1|Pig_sigma` and `Time|Pig_sigma`. \n",
+"\n",
 "Can't believe it? Come on, run this notebook on your side and have a try! "
 ]
 },
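The `~` negation described above is standard ArviZ behavior and easy to try in isolation; a self-contained sketch with made-up draws (all names invented for illustration):

```python
import numpy as np
import arviz as az

# Two fake posterior variables; "~b" means "everything except b"
idata = az.from_dict({"a": np.random.randn(2, 200), "b": np.random.randn(2, 200)})
az.plot_trace(idata, var_names=["~b"], compact=True)
```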
@@ -629,7 +630,7 @@
 "* The credible interval for `Time` is far away from 0, so we can be confident there's a positive relationship\n",
 "the `Weight` of the pigs and `Time`. \n",
 "\n",
-"We're not making any great discovering by stating that as time passes we expect the pigs to weight more, but this very simple example can be used as a starting point in applications where the relationship between the variables\n",
+"We're not making any great discoveries by stating that as time passes we expect the pigs to weigh more, but this very simple example can be used as a starting point in applications where the relationship between the variables\n",
 "is not that clear beforehand."
 ]
 },
@@ -725,9 +726,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -739,7 +740,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/orthogonal_polynomial_reg.ipynb

Lines changed: 4 additions & 4 deletions
@@ -1479,7 +1479,7 @@
 "id": "433969dd",
 "metadata": {},
 "source": [
-"Looking at this plot with the 68% and 95% CIs shown, the fit looks _okay_. Most notably, at about 160 hp, then the data diverge from the fit pretty drastically. The fit at low hp values isn't particularly good either, there's quite a bit that falls outside of our 95% CI. This can be accented pretty heavily by looking at the the residuals from the mean of the model."
+"Looking at this plot with the 68% and 95% CIs shown, the fit looks _okay_. Most notably, at about 160 hp, then the data diverge from the fit pretty drastically. The fit at low hp values isn't particularly good either, there's quite a bit that falls outside of our 95% CI. This can be accented pretty heavily by looking at the residuals from the mean of the model."
 ]
 },
 {
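The residual check this sentence points at amounts to subtracting the posterior-mean fit from the observations. A sketch under assumed names: an `idata` holding a posterior variable for the mean (called `mu` here) and an observed `mpg` column in `data`; both names are hypothetical, not the notebook's.

```python
import numpy as np

# Average the fitted mean over chains and draws, then subtract from the data
mu_hat = idata.posterior["mu"].mean(dim=("chain", "draw")).to_numpy()
residuals = data["mpg"].to_numpy() - mu_hat
```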
@@ -2339,9 +2339,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -2353,7 +2353,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/quantile_regression.ipynb

Lines changed: 5 additions & 5 deletions
@@ -45,9 +45,9 @@
 "source": [
 "## Asymmetric Laplace distribution\n",
 "\n",
-"At first it could be weird to think which distribution we should use as the likelihood for quantile regression or how to write a Bayesian model for quantile regression. But it turns out the answer is quite simple, we just need to use the asymmetric Laplace distribution. This distribution has one parameter controling the mean, another for the scale and a third one for the asymmetry. There are at least two alternative parametrizations regarding this asymmetric parameter. In terms of $\\kappa$ a parameter that goes from 0 to $\\infty$ and in terms of $q$ a number between 0 and 1. This later parametrization is more intuitive for quantile regression as we can directly interpre it as the quantile of interest.\n",
+"At first it could be weird to think which distribution we should use as the likelihood for quantile regression or how to write a Bayesian model for quantile regression. But it turns out the answer is quite simple, we just need to use the asymmetric Laplace distribution. This distribution has one parameter controling the mean, another for the scale and a third one for the asymmetry. There are at least two alternative parametrizations regarding this asymmetric parameter. In terms of $\\kappa$ a parameter that goes from 0 to $\\infty$ and in terms of $q$ a number between 0 and 1. This later parametrization is more intuitive for quantile regression as we can directly interpret it as the quantile of interest.\n",
 "\n",
-"On the next cell we compute the pdf of 3 distribution from the Asymmetric Laplace family"
+"On the next cell we compute the pdf of 3 distributions from the Asymmetric Laplace family:"
 ]
 },
 {
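The pdf computation the corrected sentence announces can be reproduced with SciPy's `laplace_asymmetric`, which uses the κ parametrization; a sketch with illustrative κ values:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.linspace(-6, 6, 500)
for kappa in (0.5, 1.0, 2.0):
    # kappa = 1 recovers the symmetric Laplace distribution
    plt.plot(x, stats.laplace_asymmetric.pdf(x, kappa), label=f"$\\kappa = {kappa}$")
plt.legend()
plt.show()
```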
@@ -524,9 +524,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -538,7 +538,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,

docs/notebooks/splines_cherry_blossoms.ipynb

Lines changed: 5 additions & 5 deletions
@@ -197,7 +197,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The variable we are interested in modeling is `\"doy\"`, which stands for Day of Year. Also notice this variable contains several missing value which are discarded next."
+"The variable we are interested in modeling is `\"doy\"`, which stands for Day of Year. Also notice this variable contains several missing values which are discarded next."
 ]
 },
 {
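The discarding step mentioned here is ordinarily a single pandas call; a sketch assuming the data frame is named `data`:

```python
# Drop rows where the response "doy" is missing
data = data.dropna(subset=["doy"]).reset_index(drop=True)
```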
@@ -1061,7 +1061,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Because it's not something that you're supposed to consult regularly, Bambi does not expose the design matrix. However, with a some knowledge of the internals, it is possible to have access to it:"
+"Because it's not something that you're supposed to consult regularly, Bambi does not expose the design matrix. However, with some knowledge of the internals, it is possible to have access to it:"
 ]
 },
 {
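Without reproducing the notebook's exact internals, an equivalent design matrix can be rebuilt with formulae, the library Bambi uses to evaluate model formulas. A sketch; the spline formula and `df=9` are placeholders, not the notebook's specification:

```python
from formulae import design_matrices

# Evaluate the same kind of spline formula Bambi would build internally
dm = design_matrices("doy ~ bs(year, df=9)", data)
X = dm.common.design_matrix  # basis columns as a numpy array
```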
@@ -1871,9 +1871,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "dev",
+"display_name": "Python [conda env:base] *",
 "language": "python",
-"name": "python3"
+"name": "conda-base-py"
 },
 "language_info": {
 "codemirror_mode": {
@@ -1885,7 +1885,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.13.7"
+"version": "3.11.5"
 }
 },
 "nbformat": 4,
