From 2d2b463baecba3d09f2af1ab5db090b10fd1b415 Mon Sep 17 00:00:00 2001
From: jmbeach
Date: Fri, 25 Feb 2022 14:56:35 -0500
Subject: [PATCH] replace markdown pipe symbols

---
 .jekyll-metadata | Bin 2682046 -> 3337217 bytes
 _site/404.html | 2 +-
 _site/README.md | 4 +-
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 _site/advanced-planning-exercises/index.html | 2 +-
 .../advanced-search-exercises/ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_16/index.html | 2 +-
 .../ex_17/index.html | 2 +-
 .../advanced-search-exercises/ex_2/index.html | 2 +-
 .../advanced-search-exercises/ex_3/index.html | 2 +-
 .../advanced-search-exercises/ex_4/index.html | 2 +-
 .../advanced-search-exercises/ex_5/index.html | 2 +-
 .../advanced-search-exercises/ex_6/index.html | 2 +-
 .../advanced-search-exercises/ex_7/index.html | 2 +-
 .../advanced-search-exercises/ex_8/index.html | 2 +-
 .../advanced-search-exercises/ex_9/index.html | 2 +-
 _site/advanced-search-exercises/index.html | 2 +-
 _site/agents-exercises/ex_1/index.html | 2 +-
 _site/agents-exercises/ex_10/index.html | 2 +-
 _site/agents-exercises/ex_11/index.html | 2 +-
 _site/agents-exercises/ex_12/index.html | 2 +-
 _site/agents-exercises/ex_13/index.html | 2 +-
 _site/agents-exercises/ex_14/index.html | 2 +-
 _site/agents-exercises/ex_15/index.html | 2 +-
 _site/agents-exercises/ex_16/index.html | 2 +-
 _site/agents-exercises/ex_2/index.html | 2 +-
 _site/agents-exercises/ex_3/index.html | 2 +-
 _site/agents-exercises/ex_4/index.html | 2 +-
 _site/agents-exercises/ex_5/index.html | 2 +-
 _site/agents-exercises/ex_6/index.html | 2 +-
 _site/agents-exercises/ex_7/index.html | 2 +-
 _site/agents-exercises/ex_8/index.html | 2 +-
 _site/agents-exercises/ex_9/index.html | 2 +-
 _site/agents-exercises/index.html | 2 +-
 _site/answersubmitted/index.html | 2 +-
 _site/bayes-nets-exercises/ex_1/index.html | 2 +-
 _site/bayes-nets-exercises/ex_10/index.html | 2 +-
 _site/bayes-nets-exercises/ex_11/index.html | 6 +-
 _site/bayes-nets-exercises/ex_12/index.html | 2 +-
 _site/bayes-nets-exercises/ex_13/index.html | 2 +-
 _site/bayes-nets-exercises/ex_14/index.html | 6 +-
 _site/bayes-nets-exercises/ex_15/index.html | 6 +-
 _site/bayes-nets-exercises/ex_16/index.html | 2 +-
 _site/bayes-nets-exercises/ex_17/index.html | 2 +-
 _site/bayes-nets-exercises/ex_18/index.html | 10 +--
 _site/bayes-nets-exercises/ex_19/index.html | 2 +-
 _site/bayes-nets-exercises/ex_2/index.html | 2 +-
 _site/bayes-nets-exercises/ex_20/index.html | 2 +-
 _site/bayes-nets-exercises/ex_21/index.html | 6 +-
 _site/bayes-nets-exercises/ex_22/index.html | 2 +-
 _site/bayes-nets-exercises/ex_23/index.html | 14 ++--
 _site/bayes-nets-exercises/ex_24/index.html | 2 +-
 _site/bayes-nets-exercises/ex_3/index.html | 30 +++---
 _site/bayes-nets-exercises/ex_4/index.html | 2 +-
 _site/bayes-nets-exercises/ex_5/index.html | 2 +-
 _site/bayes-nets-exercises/ex_6/index.html | 2 +-
 _site/bayes-nets-exercises/ex_7/index.html | 2 +-
 _site/bayes-nets-exercises/ex_8/index.html | 2 +-
 _site/bayes-nets-exercises/ex_9/index.html | 2 +-
 _site/bayes-nets-exercises/index.html | 34 ++++----
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 _site/bayesian-learning-exercises/index.html | 2 +-
 _site/bookmarks/index.html | 2 +-
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_16/index.html | 2 +-
 .../ex_17/index.html | 2 +-
 .../ex_18/index.html | 2 +-
 .../ex_19/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_20/index.html | 2 +-
 .../ex_21/index.html | 2 +-
 .../ex_22/index.html | 2 +-
 .../ex_23/index.html | 2 +-
 .../ex_24/index.html | 2 +-
 .../ex_25/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 _site/complex-decisions-exercises/index.html | 2 +-
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_16/index.html | 2 +-
 .../ex_17/index.html | 2 +-
 .../ex_18/index.html | 2 +-
 .../ex_19/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_20/index.html | 2 +-
 .../ex_21/index.html | 2 +-
 .../ex_22/index.html | 2 +-
 .../ex_23/index.html | 2 +-
 .../ex_24/index.html | 2 +-
 .../ex_25/index.html | 2 +-
 .../ex_26/index.html | 2 +-
 .../ex_27/index.html | 2 +-
 .../ex_28/index.html | 2 +-
 .../ex_29/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_30/index.html | 2 +-
 .../ex_31/index.html | 2 +-
 .../ex_32/index.html | 2 +-
 .../ex_33/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 _site/concept-learning-exercises/index.html | 2 +-
 _site/csp-exercises/ex_1/index.html | 2 +-
 _site/csp-exercises/ex_10/index.html | 2 +-
 _site/csp-exercises/ex_11/index.html | 2 +-
 _site/csp-exercises/ex_12/index.html | 2 +-
 _site/csp-exercises/ex_13/index.html | 2 +-
 _site/csp-exercises/ex_14/index.html | 2 +-
 _site/csp-exercises/ex_15/index.html | 2 +-
 _site/csp-exercises/ex_16/index.html | 2 +-
 _site/csp-exercises/ex_17/index.html | 2 +-
 _site/csp-exercises/ex_18/index.html | 2 +-
 _site/csp-exercises/ex_19/index.html | 2 +-
 _site/csp-exercises/ex_2/index.html | 2 +-
 _site/csp-exercises/ex_20/index.html | 2 +-
 _site/csp-exercises/ex_3/index.html | 2 +-
 _site/csp-exercises/ex_4/index.html | 2 +-
 _site/csp-exercises/ex_5/index.html | 2 +-
 _site/csp-exercises/ex_6/index.html | 2 +-
 _site/csp-exercises/ex_7/index.html | 2 +-
 _site/csp-exercises/ex_8/index.html | 2 +-
 _site/csp-exercises/ex_9/index.html | 2 +-
 _site/csp-exercises/index.html | 2 +-
 _site/dbn-exercises/ex_1/index.html | 2 +-
 _site/dbn-exercises/ex_10/index.html | 2 +-
 _site/dbn-exercises/ex_11/index.html | 2 +-
 _site/dbn-exercises/ex_12/index.html | 2 +-
 _site/dbn-exercises/ex_13/index.html | 2 +-
 _site/dbn-exercises/ex_14/index.html | 2 +-
 _site/dbn-exercises/ex_15/index.html | 2 +-
 _site/dbn-exercises/ex_16/index.html | 2 +-
 _site/dbn-exercises/ex_17/index.html | 2 +-
 _site/dbn-exercises/ex_18/index.html | 2 +-
 _site/dbn-exercises/ex_19/index.html | 2 +-
 _site/dbn-exercises/ex_2/index.html | 2 +-
 _site/dbn-exercises/ex_20/index.html | 2 +-
 _site/dbn-exercises/ex_3/index.html | 2 +-
 _site/dbn-exercises/ex_4/index.html | 2 +-
 _site/dbn-exercises/ex_5/index.html | 2 +-
 _site/dbn-exercises/ex_6/index.html | 2 +-
 _site/dbn-exercises/ex_7/index.html | 2 +-
 _site/dbn-exercises/ex_8/index.html | 2 +-
 _site/dbn-exercises/ex_9/index.html | 2 +-
 _site/dbn-exercises/index.html | 2 +-
 .../decision-theory-exercises/ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_16/index.html | 2 +-
 .../ex_17/index.html | 2 +-
 .../ex_18/index.html | 2 +-
 .../ex_19/index.html | 2 +-
 .../decision-theory-exercises/ex_2/index.html | 2 +-
 .../ex_20/index.html | 2 +-
 .../ex_21/index.html | 2 +-
 .../ex_22/index.html | 2 +-
 .../ex_23/index.html | 2 +-
 .../decision-theory-exercises/ex_3/index.html | 2 +-
 .../decision-theory-exercises/ex_4/index.html | 2 +-
 .../decision-theory-exercises/ex_5/index.html | 2 +-
 .../decision-theory-exercises/ex_6/index.html | 2 +-
 .../decision-theory-exercises/ex_7/index.html | 2 +-
 .../decision-theory-exercises/ex_8/index.html | 2 +-
 .../decision-theory-exercises/ex_9/index.html | 2 +-
 _site/decision-theory-exercises/index.html | 2 +-
 _site/fol-exercises/ex_1/index.html | 2 +-
 _site/fol-exercises/ex_10/index.html | 2 +-
 _site/fol-exercises/ex_11/index.html | 2 +-
 _site/fol-exercises/ex_12/index.html | 2 +-
 _site/fol-exercises/ex_13/index.html | 2 +-
 _site/fol-exercises/ex_14/index.html | 2 +-
 _site/fol-exercises/ex_15/index.html | 2 +-
 _site/fol-exercises/ex_16/index.html | 2 +-
 _site/fol-exercises/ex_17/index.html | 2 +-
 _site/fol-exercises/ex_18/index.html | 2 +-
 _site/fol-exercises/ex_19/index.html | 2 +-
 _site/fol-exercises/ex_2/index.html | 2 +-
 _site/fol-exercises/ex_20/index.html | 2 +-
 _site/fol-exercises/ex_21/index.html | 2 +-
 _site/fol-exercises/ex_22/index.html | 2 +-
 _site/fol-exercises/ex_23/index.html | 2 +-
 _site/fol-exercises/ex_24/index.html | 2 +-
 _site/fol-exercises/ex_25/index.html | 2 +-
 _site/fol-exercises/ex_26/index.html | 2 +-
 _site/fol-exercises/ex_27/index.html | 2 +-
 _site/fol-exercises/ex_28/index.html | 2 +-
 _site/fol-exercises/ex_29/index.html | 2 +-
 _site/fol-exercises/ex_3/index.html | 2 +-
 _site/fol-exercises/ex_30/index.html | 2 +-
 _site/fol-exercises/ex_31/index.html | 2 +-
 _site/fol-exercises/ex_32/index.html | 2 +-
 _site/fol-exercises/ex_33/index.html | 2 +-
 _site/fol-exercises/ex_34/index.html | 2 +-
 _site/fol-exercises/ex_35/index.html | 2 +-
 _site/fol-exercises/ex_36/index.html | 2 +-
 _site/fol-exercises/ex_4/index.html | 2 +-
 _site/fol-exercises/ex_5/index.html | 2 +-
 _site/fol-exercises/ex_6/index.html | 2 +-
 _site/fol-exercises/ex_7/index.html | 2 +-
 _site/fol-exercises/ex_8/index.html | 2 +-
 _site/fol-exercises/ex_9/index.html | 2 +-
 _site/fol-exercises/index.html | 2 +-
 _site/game-playing-exercises/ex_1/index.html | 2 +-
 _site/game-playing-exercises/ex_10/index.html | 2 +-
 _site/game-playing-exercises/ex_11/index.html | 2 +-
 _site/game-playing-exercises/ex_12/index.html | 2 +-
 _site/game-playing-exercises/ex_13/index.html | 2 +-
 _site/game-playing-exercises/ex_14/index.html | 2 +-
 _site/game-playing-exercises/ex_15/index.html | 2 +-
 _site/game-playing-exercises/ex_16/index.html | 2 +-
 _site/game-playing-exercises/ex_17/index.html | 2 +-
 _site/game-playing-exercises/ex_18/index.html | 2 +-
 _site/game-playing-exercises/ex_19/index.html | 2 +-
 _site/game-playing-exercises/ex_2/index.html | 2 +-
 _site/game-playing-exercises/ex_20/index.html | 2 +-
 _site/game-playing-exercises/ex_21/index.html | 2 +-
 _site/game-playing-exercises/ex_22/index.html | 2 +-
 _site/game-playing-exercises/ex_23/index.html | 2 +-
 _site/game-playing-exercises/ex_24/index.html | 2 +-
 _site/game-playing-exercises/ex_25/index.html | 2 +-
 _site/game-playing-exercises/ex_3/index.html | 2 +-
 _site/game-playing-exercises/ex_4/index.html | 2 +-
 _site/game-playing-exercises/ex_5/index.html | 2 +-
 _site/game-playing-exercises/ex_6/index.html | 2 +-
 _site/game-playing-exercises/ex_7/index.html | 2 +-
 _site/game-playing-exercises/ex_8/index.html | 2 +-
 _site/game-playing-exercises/ex_9/index.html | 2 +-
 _site/game-playing-exercises/index.html | 2 +-
 _site/ilp-exercises/ex_1/index.html | 2 +-
 _site/ilp-exercises/ex_2/index.html | 2 +-
 _site/ilp-exercises/ex_3/index.html | 2 +-
 _site/ilp-exercises/ex_4/index.html | 2 +-
 _site/ilp-exercises/ex_5/index.html | 2 +-
 _site/ilp-exercises/ex_6/index.html | 2 +-
 _site/ilp-exercises/ex_7/index.html | 2 +-
 _site/ilp-exercises/ex_8/index.html | 2 +-
 _site/ilp-exercises/index.html | 2 +-
 _site/index.html | 2 +-
 _site/intro-exercises/ex_1/index.html | 2 +-
 _site/intro-exercises/ex_10/index.html | 2 +-
 _site/intro-exercises/ex_11/index.html | 2 +-
 _site/intro-exercises/ex_12/index.html | 2 +-
 _site/intro-exercises/ex_13/index.html | 2 +-
 _site/intro-exercises/ex_14/index.html | 2 +-
 _site/intro-exercises/ex_15/index.html | 2 +-
 _site/intro-exercises/ex_16/index.html | 2 +-
 _site/intro-exercises/ex_17/index.html | 2 +-
 _site/intro-exercises/ex_18/index.html | 2 +-
 _site/intro-exercises/ex_19/index.html | 2 +-
 _site/intro-exercises/ex_2/index.html | 2 +-
 _site/intro-exercises/ex_20/index.html | 2 +-
 _site/intro-exercises/ex_3/index.html | 2 +-
 _site/intro-exercises/ex_4/index.html | 2 +-
 _site/intro-exercises/ex_5/index.html | 2 +-
 _site/intro-exercises/ex_6/index.html | 2 +-
 _site/intro-exercises/ex_7/index.html | 2 +-
 _site/intro-exercises/ex_8/index.html | 2 +-
 _site/intro-exercises/ex_9/index.html | 2 +-
 _site/intro-exercises/index.html | 2 +-
 .../knowledge-logic-exercises/ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_16/index.html | 2 +-
 .../ex_17/index.html | 2 +-
 .../ex_18/index.html | 2 +-
 .../ex_19/index.html | 2 +-
 .../knowledge-logic-exercises/ex_2/index.html | 2 +-
 .../ex_20/index.html | 2 +-
 .../ex_21/index.html | 2 +-
 .../ex_22/index.html | 2 +-
 .../ex_23/index.html | 2 +-
 .../ex_24/index.html | 2 +-
 .../ex_25/index.html | 2 +-
 .../ex_26/index.html | 2 +-
 .../ex_27/index.html | 2 +-
 .../ex_28/index.html | 2 +-
 .../ex_29/index.html | 2 +-
 .../knowledge-logic-exercises/ex_3/index.html | 2 +-
 .../ex_30/index.html | 2 +-
 .../ex_31/index.html | 2 +-
 .../ex_32/index.html | 2 +-
 .../ex_33/index.html | 2 +-
 .../ex_34/index.html | 2 +-
 .../ex_35/index.html | 2 +-
 .../knowledge-logic-exercises/ex_4/index.html | 2 +-
 .../knowledge-logic-exercises/ex_5/index.html | 2 +-
 .../knowledge-logic-exercises/ex_6/index.html | 2 +-
 .../knowledge-logic-exercises/ex_7/index.html | 2 +-
 .../knowledge-logic-exercises/ex_8/index.html | 2 +-
 .../knowledge-logic-exercises/ex_9/index.html | 2 +-
 _site/knowledge-logic-exercises/index.html | 2 +-
 _site/kr-exercises/ex_1/index.html | 2 +-
 _site/kr-exercises/ex_10/index.html | 2 +-
 _site/kr-exercises/ex_11/index.html | 2 +-
 _site/kr-exercises/ex_12/index.html | 2 +-
 _site/kr-exercises/ex_13/index.html | 2 +-
 _site/kr-exercises/ex_14/index.html | 2 +-
 _site/kr-exercises/ex_15/index.html | 2 +-
 _site/kr-exercises/ex_16/index.html | 2 +-
 _site/kr-exercises/ex_17/index.html | 2 +-
 _site/kr-exercises/ex_18/index.html | 2 +-
 _site/kr-exercises/ex_19/index.html | 2 +-
 _site/kr-exercises/ex_2/index.html | 2 +-
 _site/kr-exercises/ex_20/index.html | 2 +-
 _site/kr-exercises/ex_21/index.html | 2 +-
 _site/kr-exercises/ex_22/index.html | 2 +-
 _site/kr-exercises/ex_23/index.html | 2 +-
 _site/kr-exercises/ex_24/index.html | 2 +-
 _site/kr-exercises/ex_25/index.html | 2 +-
 _site/kr-exercises/ex_26/index.html | 2 +-
 _site/kr-exercises/ex_27/index.html | 2 +-
 _site/kr-exercises/ex_28/index.html | 2 +-
 _site/kr-exercises/ex_29/index.html | 2 +-
 _site/kr-exercises/ex_3/index.html | 2 +-
 _site/kr-exercises/ex_30/index.html | 2 +-
 _site/kr-exercises/ex_4/index.html | 2 +-
 _site/kr-exercises/ex_5/index.html | 2 +-
 _site/kr-exercises/ex_6/index.html | 2 +-
 _site/kr-exercises/ex_7/index.html | 2 +-
 _site/kr-exercises/ex_8/index.html | 2 +-
 _site/kr-exercises/ex_9/index.html | 2 +-
 _site/kr-exercises/index.html | 2 +-
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_14/index.html | 2 +-
 .../ex_15/index.html | 2 +-
 .../ex_16/index.html | 2 +-
 .../ex_17/index.html | 2 +-
 .../ex_18/index.html | 2 +-
 .../ex_19/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_20/index.html | 2 +-
 .../ex_21/index.html | 2 +-
 .../ex_22/index.html | 2 +-
 .../ex_23/index.html | 2 +-
 .../ex_24/index.html | 2 +-
 .../ex_25/index.html | 2 +-
 .../ex_26/index.html | 2 +-
 .../ex_27/index.html | 2 +-
 .../ex_28/index.html | 2 +-
 .../ex_29/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_30/index.html | 2 +-
 .../ex_31/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 _site/logical-inference-exercises/index.html | 2 +-
 .../exercises/ex_1/question.md | 2 +-
 .../exercises/ex_20/question.md | 2 +-
 .../exercises/ex_23/question.md | 4 +-
 .../exercises/ex_24/question.md | 14 ++--
 .../exercises/ex_27/question.md | 2 +-
 .../exercises/ex_3/question.md | 10 +--
 .../exercises/ex_8/question.md | 4 +-
 .../exercises/ex_9/question.md | 4 +-
 .../exercises/ex_11/question.md | 2 +-
 .../exercises/ex_14/question.md | 2 +-
 .../exercises/ex_15/question.md | 2 +-
 .../exercises/ex_18/question.md | 4 +-
 .../exercises/ex_21/question.md | 2 +-
 .../exercises/ex_23/question.md | 6 +-
 .../exercises/ex_3/question.md | 14 ++--
 _site/markdown/Future Exercises/index.html | 2 +-
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 _site/nlp-communicating-exercises/index.html | 2 +-
 _site/nlp-english-exercises/ex_1/index.html | 2 +-
 _site/nlp-english-exercises/ex_10/index.html | 2 +-
 _site/nlp-english-exercises/ex_11/index.html | 2 +-
 _site/nlp-english-exercises/ex_12/index.html | 2 +-
 _site/nlp-english-exercises/ex_13/index.html | 2 +-
 _site/nlp-english-exercises/ex_14/index.html | 2 +-
 _site/nlp-english-exercises/ex_15/index.html | 2 +-
 _site/nlp-english-exercises/ex_16/index.html | 2 +-
 _site/nlp-english-exercises/ex_17/index.html | 2 +-
 _site/nlp-english-exercises/ex_18/index.html | 2 +-
 _site/nlp-english-exercises/ex_19/index.html | 2 +-
 _site/nlp-english-exercises/ex_2/index.html | 2 +-
 _site/nlp-english-exercises/ex_20/index.html | 2 +-
 _site/nlp-english-exercises/ex_21/index.html | 2 +-
 _site/nlp-english-exercises/ex_22/index.html | 2 +-
 _site/nlp-english-exercises/ex_3/index.html | 2 +-
 _site/nlp-english-exercises/ex_4/index.html | 2 +-
 _site/nlp-english-exercises/ex_5/index.html | 2 +-
 _site/nlp-english-exercises/ex_6/index.html | 2 +-
 _site/nlp-english-exercises/ex_7/index.html | 2 +-
 _site/nlp-english-exercises/ex_8/index.html | 2 +-
 _site/nlp-english-exercises/ex_9/index.html | 2 +-
 _site/nlp-english-exercises/index.html | 2 +-
 _site/perception-exercises/ex_1/index.html | 2 +-
 _site/perception-exercises/ex_2/index.html | 2 +-
 _site/perception-exercises/ex_3/index.html | 2 +-
 _site/perception-exercises/ex_4/index.html | 2 +-
 _site/perception-exercises/ex_5/index.html | 2 +-
 _site/perception-exercises/ex_6/index.html | 2 +-
 _site/perception-exercises/ex_7/index.html | 2 +-
 _site/perception-exercises/ex_8/index.html | 2 +-
 _site/perception-exercises/index.html | 2 +-
 _site/philosophy-exercises/ex_1/index.html | 2 +-
 _site/philosophy-exercises/ex_10/index.html | 2 +-
 _site/philosophy-exercises/ex_11/index.html | 2 +-
 _site/philosophy-exercises/ex_12/index.html | 2 +-
 _site/philosophy-exercises/ex_2/index.html | 2 +-
 _site/philosophy-exercises/ex_3/index.html | 2 +-
 _site/philosophy-exercises/ex_4/index.html | 2 +-
 _site/philosophy-exercises/ex_5/index.html | 2 +-
 _site/philosophy-exercises/ex_6/index.html | 2 +-
 _site/philosophy-exercises/ex_7/index.html | 2 +-
 _site/philosophy-exercises/ex_8/index.html | 2 +-
 _site/philosophy-exercises/ex_9/index.html | 2 +-
 _site/philosophy-exercises/index.html | 2 +-
 _site/planning-exercises/ex_1/index.html | 2 +-
 _site/planning-exercises/ex_10/index.html | 2 +-
 _site/planning-exercises/ex_11/index.html | 2 +-
 _site/planning-exercises/ex_12/index.html | 2 +-
 _site/planning-exercises/ex_13/index.html | 2 +-
 _site/planning-exercises/ex_14/index.html | 2 +-
 _site/planning-exercises/ex_15/index.html | 2 +-
 _site/planning-exercises/ex_16/index.html | 2 +-
 _site/planning-exercises/ex_17/index.html | 2 +-
 _site/planning-exercises/ex_18/index.html | 2 +-
 _site/planning-exercises/ex_2/index.html | 2 +-
 _site/planning-exercises/ex_3/index.html | 2 +-
 _site/planning-exercises/ex_4/index.html | 2 +-
 _site/planning-exercises/ex_5/index.html | 2 +-
 _site/planning-exercises/ex_6/index.html | 2 +-
 _site/planning-exercises/ex_7/index.html | 2 +-
 _site/planning-exercises/ex_8/index.html | 2 +-
 _site/planning-exercises/ex_9/index.html | 2 +-
 _site/planning-exercises/index.html | 2 +-
 _site/probability-exercises/ex_1/index.html | 6 +-
 _site/probability-exercises/ex_10/index.html | 2 +-
 _site/probability-exercises/ex_11/index.html | 2 +-
 _site/probability-exercises/ex_12/index.html | 2 +-
 _site/probability-exercises/ex_13/index.html | 2 +-
 _site/probability-exercises/ex_14/index.html | 2 +-
 _site/probability-exercises/ex_15/index.html | 2 +-
 _site/probability-exercises/ex_16/index.html | 2 +-
 _site/probability-exercises/ex_17/index.html | 2 +-
 _site/probability-exercises/ex_18/index.html | 2 +-
 _site/probability-exercises/ex_19/index.html | 2 +-
 _site/probability-exercises/ex_2/index.html | 2 +-
 _site/probability-exercises/ex_20/index.html | 6 +-
 _site/probability-exercises/ex_21/index.html | 2 +-
 _site/probability-exercises/ex_22/index.html | 2 +-
 _site/probability-exercises/ex_23/index.html | 10 +--
 _site/probability-exercises/ex_24/index.html | 30 +++---
 _site/probability-exercises/ex_25/index.html | 2 +-
 _site/probability-exercises/ex_26/index.html | 2 +-
 _site/probability-exercises/ex_27/index.html | 6 +-
 _site/probability-exercises/ex_28/index.html | 2 +-
 _site/probability-exercises/ex_29/index.html | 2 +-
 _site/probability-exercises/ex_3/index.html | 22 ++---
 _site/probability-exercises/ex_30/index.html | 2 +-
 _site/probability-exercises/ex_31/index.html | 2 +-
 _site/probability-exercises/ex_4/index.html | 2 +-
 _site/probability-exercises/ex_5/index.html | 2 +-
 _site/probability-exercises/ex_6/index.html | 2 +-
 _site/probability-exercises/ex_7/index.html | 2 +-
 _site/probability-exercises/ex_8/index.html | 10 +--
 _site/probability-exercises/ex_9/index.html | 10 +--
 _site/probability-exercises/index.html | 44 +++++-----
 _site/question_bank/index.html | 76 +++++++++---------
 .../ex_1/index.html | 2 +-
 .../ex_10/index.html | 2 +-
 .../ex_11/index.html | 2 +-
 .../ex_12/index.html | 2 +-
 .../ex_13/index.html | 2 +-
 .../ex_2/index.html | 2 +-
 .../ex_3/index.html | 2 +-
 .../ex_4/index.html | 2 +-
 .../ex_5/index.html | 2 +-
 .../ex_6/index.html | 2 +-
 .../ex_7/index.html | 2 +-
 .../ex_8/index.html | 2 +-
 .../ex_9/index.html | 2 +-
 .../index.html | 2 +-
 _site/robotics-exercises/ex_1/index.html | 2 +-
 _site/robotics-exercises/ex_10/index.html | 2 +-
 _site/robotics-exercises/ex_11/index.html | 2 +-
 _site/robotics-exercises/ex_12/index.html | 2 +-
 _site/robotics-exercises/ex_2/index.html | 2 +-
 _site/robotics-exercises/ex_3/index.html | 2 +-
 _site/robotics-exercises/ex_4/index.html | 2 +-
 _site/robotics-exercises/ex_5/index.html | 2 +-
 _site/robotics-exercises/ex_6/index.html | 2 +-
 _site/robotics-exercises/ex_7/index.html | 2 +-
 _site/robotics-exercises/ex_8/index.html | 2 +-
 _site/robotics-exercises/ex_9/index.html | 2 +-
 _site/robotics-exercises/index.html | 2 +-
 _site/search-exercises/ex_1/index.html | 2 +-
 _site/search-exercises/ex_10/index.html | 2 +-
 _site/search-exercises/ex_12/index.html | 2 +-
 _site/search-exercises/ex_13/index.html | 2 +-
 _site/search-exercises/ex_14/index.html | 2 +-
 _site/search-exercises/ex_15/index.html | 2 +-
 _site/search-exercises/ex_16/index.html | 2 +-
 _site/search-exercises/ex_17/index.html | 2 +-
 _site/search-exercises/ex_18/index.html | 2 +-
 _site/search-exercises/ex_19/index.html | 2 +-
 _site/search-exercises/ex_2/index.html | 2 +-
 _site/search-exercises/ex_20/index.html | 2 +-
 _site/search-exercises/ex_21/index.html | 2 +-
 _site/search-exercises/ex_22/index.html | 2 +-
 _site/search-exercises/ex_23/index.html | 2 +-
 _site/search-exercises/ex_24/index.html | 2 +-
 _site/search-exercises/ex_25/index.html | 2 +-
 _site/search-exercises/ex_26/index.html | 2 +-
 _site/search-exercises/ex_27/index.html | 2 +-
 _site/search-exercises/ex_28/index.html | 2 +-
 _site/search-exercises/ex_29/index.html | 2 +-
 _site/search-exercises/ex_3/index.html | 2 +-
 _site/search-exercises/ex_30/index.html | 2 +-
 _site/search-exercises/ex_31/index.html | 2 +-
 _site/search-exercises/ex_32/index.html | 2 +-
 _site/search-exercises/ex_33/index.html | 2 +-
 _site/search-exercises/ex_34/index.html | 2 +-
 _site/search-exercises/ex_35/index.html | 2 +-
 _site/search-exercises/ex_36/index.html | 2 +-
 _site/search-exercises/ex_37/index.html | 2 +-
 _site/search-exercises/ex_38/index.html | 2 +-
 _site/search-exercises/ex_39/index.html | 2 +-
 _site/search-exercises/ex_4/index.html | 2 +-
 _site/search-exercises/ex_40/index.html | 2 +-
 _site/search-exercises/ex_5/index.html | 2 +-
 _site/search-exercises/ex_6/index.html | 2 +-
 _site/search-exercises/ex_7/index.html | 2 +-
 _site/search-exercises/ex_8/index.html | 2 +-
 _site/search-exercises/ex_9/index.html | 2 +-
 _site/search-exercises/index.html | 2 +-
 _site/search/index.html | 2 +-
 _site/search_data.json | 74 ++++++++---------
 .../exercises/ex_1/question.md | 2 +-
 .../exercises/ex_20/question.md | 2 +-
 .../exercises/ex_23/question.md | 4 +-
 .../exercises/ex_24/question.md | 14 ++--
 .../exercises/ex_27/question.md | 2 +-
 .../exercises/ex_3/question.md | 10 +--
 .../exercises/ex_8/question.md | 4 +-
 .../exercises/ex_9/question.md | 4 +-
 .../exercises/ex_11/question.md | 2 +-
 .../exercises/ex_14/question.md | 2 +-
 .../exercises/ex_15/question.md | 2 +-
 .../exercises/ex_18/question.md | 4 +-
 .../exercises/ex_21/question.md | 2 +-
 .../exercises/ex_23/question.md | 6 +-
 .../exercises/ex_3/question.md | 14 ++--
 621 files changed, 849 insertions(+), 849 deletions(-)

diff --git a/.jekyll-metadata b/.jekyll-metadata
index
9dff658dc0d5168cd0caa278f2db763cff81b608..70bb8f710d6d6bebc7e0769d921addaae4f03d83 100644
GIT binary patch
literal 3337217
zFw@YBh@=IgnWlzj8fE}h{Xhdn02^h!NDmQuCIdghxo(ubnO-MoR_q)pt=Ms$+o~U z!jc$49V|7Uk(7WX**scCSP~;R!BV7ksrkUUNYjq&y=w<1#tKfzlxSUQ9K#dUm?k%1$>&Y$=6PB?d6BL0g!gg06B9Y% zJwQ)V^Q3Noo@8`$yc%p28|GDaY4Gy}qz9p%<`vi|Hq2|DFb=?BWK=R?e!R5t{2CB^ z?@9Af4}2|8=1R+1dUeJJauWw|ebn zK5F-SF&)WrIH5M5En*wh4{ac)g#{Mr=BWWctBkuPRvB1F(q|XcH;qCYC?cd+6VjXQ zVNt#Zv1)9QIQ^Cd05Omg(!Xo|X3;zuTQQ|Ow29}kL}V}zY#t|&V+)t6>RC-}UN*Lf z4CgMBU-R$)F;GNEuc@BG$0FU>A|d@&g$p%ML`biyK3|1Jy0Jw<`mK;o4djG$Y>BS< zpg6V$CHpnndoF>^6A{QY&mCjENVa_v$hV8ZemjboCnAt*E?KZ3kFH<|Y@Ucfu6YHD z^8ek* zf~yyDY$GB4W~37XMTGQPs?+!PK#VQYe*dlZ`>BBEA50H*gO$|T=QBK>qWBAN+3VjTqNdk0(n>Z zz?`2$&-V!&dd8xfoO5hcunLfY8nglgxM&4PuM({QnP)&NK=vij3XlQ@tpJ&wL@Pj6 zD(1o>a9e;h!0;+CY>2i3!-r@U7)C^^z;Gg31%?&TDloi=R)JwgvHj$sW+aSU@vieuPAQXIn|lHwQ^krc-;iKIA&O(exJjMAIG{-{k2 zbEXIpmB&;fqVgDG5tYXN$?T14eB+_E!2KI_@Ve&@M%Z#}y(=&yB(?l60JC%ahW zgXV_$uTxVJN;HEalxPM}DA5eIP@)6Ej7evxuVzvkN;H$^P@$Q5odaWLi*XjXztsaoq>H&GJ9+21S0eP(+kk{$~ zdF?pEYs2~1z{j(Pi=BS2m7VG2!=ZV#uXDYpHnpK9FO*?2d7%u2$qQw;OI|2LTJk~} zwvrdh(3HGThM(kxGQ>2JCkKX-08*JE3m}ytC4f|hl>ky1S^`LAcnKhtAtr!ShM64p zZ&`87ZEjJEVAq?~vjd!b6H@ z7!N6)p**B`hVzi(8PY?F56gO7nf}cD9f$|AbN%(*UBlgjVZ*5fotP|!&%|Ug5w;kQ2u6hnz5mK;(oW8H6i6#J-H{4$28bG6>fdl>0IygK*>Fa$kmI5N-fm?#qx2 z!i{)~eHk~?Ehh|FgK$&Za$kn5LAbeWxi4dC5Qi%m9?Z_R2g6bJ_@Gq`uB~*ZE^sBq zF=<9p90LhFV1BG29|5k0BRPc?`RV%46t7R34aLxRtTw_i>A3QF&l~;g-mv-v{OwZlNst zePDj!mdm2w2j&-U(Jc9W+`3s*9=LwtR?ni}2d-bZHMHpWf$JB~XP6%{R~cR|21Tz~ zm_N<2hIhDwWH#AMUWCbJFbtE;hrY$m_LWHYtA!wXEa#dnoy z9N>gASr{jrp*EawhTCw$8FIr3XV?uVoS`?IaE9M-!Wn|Y2@lC|TFCp;v> zaUBlamxp9HuJ?ia@{kP2bwzMr9+Kg>ehKW$x#kH@I8(zp909BzJ8;j_g-+h<*<772 zE0Cd4S%FM(C@YYmOId+T5hyE=p+#ANOtB{`kfA?WflN`iIsa_8P2e+YRbhz?xd}^T z$VONq!#2Va8M+ac$ncG@M22vLB{Ga7ERms{+Z+z6Rjb)v#?MZh3m;FLqxj}E71JQU zv{7VF6z0aLxAq5}*3=0%yb?^*;+0?m8LtEr+ju3IaK|gbFaTZ&h7|BhsNe~vT)FMt z4)w}>WW|&)0kmMsngCjGq>1+k$C`L8INHQ(!SLpmLx|L}H}(2gJ4I`~Oy8S>VrZV` zG`H;Zds9<5LM($ngjfb}2(b*p5Ml#Cib=kRuVvCLLM)SD5n`E?`W%8Tm?R33%cM<+ zT!yG1av7$A$YrPsBA4MRh+KxOAaWVDg2-j)3L-ZUUvWFoi0|b#pCNJs@fEk_4Ef$b 
ze8p`vL%ufhayjJo2@6r#&3Co8=(U zux~Y143NrjBtR-t(E+3~{0NZBRC55S3^xL#GL;-aD#MEasZ15;ve>m{31;sRA(mNl zBE&NEgb>T{6GALQPzbRMLm|X66onAWa1=r;LsAH_VOWY=Y68BNTWKQ1hG8jgor(Bb zhNWC41@E6!9%ql6mzuLn?QSu3n+7VM@itUGlboRPL;IgeSJ3Zg@)lG+lfapSR)Nq= z$d&mnotG2Y`TR;bqjRy{-Ru)P%McVo zEW=O;u?$5a#4;R(5X+DhLTnh8@?ED|#;xx<)mCg6mg4GE0HkVRSc;n)A3AL0J(!&< z@yV;}Ld0CxLZyw7FW#A+)lu3E=LK!H@3uV%syig|D$qQvtoxD&c(a8&C z(wxf@z>*x8S{xviDa8R&8BzjBWmpLym7yhoREC!TQW;_bNM)D_AeEsefYcz|#8u$( z-^vx>0I5N^i7USW-x`FQxZ)e|twFenJFp4()*#%(9oCfpR_>@KKx&Y>iF-;H@U2YU zG;mlfKXl+~Tyw$w-pYCE29~%?hWW*1hOjNe`r_Zo)T81u8O9fv$<(6aG8wjac)+!G zY(FH&j>nwuw~Eg6o)%0t6R$AYOx(g`Gw}<0DxS*xAjyoSL6XB+n@I$qZ)UOqNHUWWK$4lf0Fulk2aseYL+pT< zgh>-9(M+yzIG0+hQ~o^LA7p0sy}Q+GH|^fj!YaV9DOLf7RIv&$+=^9zp;)W}4AWv2 zUaj1sM9h&82rR2NN+gj8=i6W3&nkAEH%Y7!j=k!-;4W7*<59!0;kk1%?^X zDlpuLRv|n)awA1xD{%8gXcZWCTz6O{n44S+(_r4TdS_Nf5bTQ>R+srAhP!11F$^su zh~Z%wK@8i<2x2(3aEPP(vnTp%eRE^_^xbX=F-#0eh+!f~LJSi>5@ML>kr2bgj)WK{ zawNon;)ba=E=zp@BV-E#~Av)UVl`q_4`*=sz>i@_eO($ zR&1LkbbDAFZEx=M)=FU$RQvV6ajth4Rqf=R?ZaQZ?c`nUBPZ{!+}=3WsNAILyWCZ+ z9lg&?67)vHeO-rom8~DWZ)ncOH#fwUZM$3cxUp>Ox!gR{+{!>&Z@6o{r}BN9oxE3G zLU7+6hNW;Zd$CL6ZC~kYb{1FJqn)!oJBvfwqn%NBeuoDTJkUC2E{(9Ngx>>Wx%Zq2?) 
zA)6zdp}*4|%gy1YxM@A8*;?y~tLcGu_u!h1Hj4gWM_lE%xdY&GdwpwQ77g`g-fQg~ z9X_{yi{f{_(DINR_Ov{{)*fYft5pm{G;R8`XWrS|$g{O#lxLfRe*dzFz8;npdzLjD zA0zVu!q6;_ioS@lZ}n%~wLx1(M~_f#<^yBRd@ve{Skt5H_v(Q&{odtTOv)aQN!j~F zl=V=~?lHg~40bm`t5IoNk4oDUBHDVWvi6X0Iq$YRJ2DV>yT2@8)D^_4fTA+wy5`TK z>qWm?j0Wb-_+hi?6(TtGsqUcN%DS!1>>6-I)b-G0-V3V@Q(+gao;e356Ivpw%3xMR zRhcZ{u_WxYdquXMcMF-qyxJZ$ zH_D5xugEkmJT6;Cg}HLO+21U7JMwA)(C>(M1ASzwEpu?a5ZBa0pRG)z<(Aw|iPF8L z>-3$^`8t>$7@*yeZ?t<@vfMlkA!Sbwt@bOu{?$&=S}(r2(_c3W7HJ50XtiG%h->L# z>uuMZW=zyvTVz_nW=AUqrF$m2%%0)i`EURwcv!}kN*BHLPJ1}Zn9O_(<{{y3?jJG| zM}!Y;_hz2uci}t~UUSTr%!dOHur=FRYx!E+ED+^Q+jXyL2RB{71{;F#0E+#UjeTyr z>t`npY&^3+uyNwPv4IT_h2zX~bMt9>YGyo~Yn*xXv5Su$?Y1WDYrn$Gn;trJ?8DRN zP58b^dslJ$bNKY+3l_w7agwT=KmgjW^>Z6zWvv~(M5w*Q&sejZuZ>OI6Jc% z&kR>O?X{I>p4}=2J4c@>$A>2k_rB(ro|rV;`*#oT4fk&R;8^%PGOtRVt=>RFp@-*D z=vo9zs55&bt&@+6?UlV@QEu|m{QtfCarP2_Oix0B5?WrE*}2!X9G|b0P;U0ZLjD#ZGxj*lgbLxYb~%COvO8%PFDf?9G~M7`(;!=B?HI<=>o`HTwHm zV{QCM$oV;OdGo}kCe3ah(v;9_UWT4Lmnos;V{>|TZx`nJDCV`3PLoQN`9SxCqiP4{ zFK?}0>+=(1;^=-%Jo(~SCH0h}dSvKQLdho`mArYL(CHT=+DOmy-8xZg{7cszuC+VFl64d3Dz zzk6Q3K6`d+BR+D^#Yrp5EBi(~F{H95cy|U-mmyZXoYtGmkNc69nv(ffu5dEt*``pRRgv^c|-dfGp)02Gr z`aR#iWISeFOVj=RxZO!Z|N6ONZ*REJzh?9HQ1=4%rg_3!tM`&ClN@{apS@(yweKGf zTi1elY>4pID*oj26IJ|yeHDN3U1PnCdHa2rZ<-vub-!vp&E{s7@nJy;<>vM5g!w=T z73cNr%_D&ln$627<5PeVy3O0mojmv_q2;`-w3}xABov!>*XUN`eVX*V)kL3!p7Rz5 z{1BgnhVv$2c5Y9Vg50Q{go5*~ikRs<3H9cMAU}{Nq2cVucCHorQ)lpQt?Zc}m{@Q% z_6x4>efe0yHLth#%ly4Y@9?OEdh=FYGY*-|XS=^Lb91Phw|kR^hwR79X78MJ^TCx? z|7vfgnl;M199vt>-Ioyj4d2avTJ?rb$Feh>+`Rl)-n@L_`b~W+{#2WNKz531`=B=u zRjyW7^4{>OxopS7^ki=iNbNb6HR8>h9*alSh^$(R$Ex*sR6QP#s`tmE>WO$%eIOoH zAM~iIY_jsnRkE7;q$*iWeKM7-rap;ER#TrmC9A1Vnv&Ht9+rB@BcDF&P*PX-S0mZ! 
zUMo9o)=cKqY@?T5+9kjQB~1YJ zeJ*nXsPD6!2%x^t8Y6)EJ`0Zk>ieul@Vx02`Y)h)(<}1f+Is~)T>H3j@3S(;N3G9l z9k0F53LdY$&#J!VK$x}cO?&qunIeAETp=jQX1^r_C|mZ@<_-vwQ2njay_^%e@M?5bMp>oY^Rtmds1 z*0Of9R#?m0&01kCRgSg7KC1=@hJBU|5QTl#4G@KW77h@FeO3+-g?*L|E~SnwOTp9?yKzCN=CLSLWDIfTAGa|S|RpNlz!zCJU?>@UyWm|Z=dT`+gU>^^-rG>@Z~ zr{sqH^7(O(UEqPktmKIyxzBn7tGUn81FN~uDg>*!&w>Q2 zxzE}JtGUlI1)BV$rE{+>8mP&A<{_x!KGP6Xai3WTs<_W21XbK;4uUG~GX+5v_nCoQ z<_At>=kxN?tn6aDyV)tSM~Y^9*zWg+9+$^1eTT{Vi7kp<8tS&{`dkVl^!2#_M(FFa zPD1GGv&cZ`>$A2%=$7e^=bd>$AEB)OkJPF1HigIYX^Y3K1&CHx;`rhfVw^l2Y|Xh>jsxL!vlM5hBN)L z&9J@SXHe2+n5>`5HiNCYTG|Y@`f6!2*y^jL&b8IoXQALyhuP}uvr<6l>$6lq=$6;NsctO!Au{?^(`K+WZ)E9MO`E~izLBM4ug4SsbmsNKZvdV7 zgabfbpT!wKU7xiXKwY1u89-g1l^H-?pM@DfU7vN?n#0WhgW0(vAC%WiWM>EcZuaPQ zUbb_54vV$7GAsOxhH0Z>;`yEOLeuB z2$t$c*9W>prn7U+wx_2Fq*fGXu+O>vIt-udUDg zE3d83C9u4%Sx@0fd>bleosPOL>PuB^}0C9bT`#3in* z&%7nBtk1M1uB^|jC9bT`q$RGb&z$x4`FXZxsDD&+I_>qMH_A?z|MZvu+75~x+s{a2 zcjTS#w~CI>t=w%vEydE+4oqpE8@o8IeeUnV0OGSmfGO>>RDdb%vt)oN?Xz@%Debd_ zfGO>>lz=Jiv!sA2?Q=EkFfvxFnkR|cz03VUv*?zk(B1%<$7;x-wJ=#du>!-PnQN=9 z&&mRzuFsMJpsvqi0-&zXGD1Gmd>U6?+qi7&vo@2rtb;VX9)sU-DfER zR^4Yw0#@B;X#!T=XNdw<-Djx+R^4aGf~^DMrFE}I6|s2ndTbG_x=+Ahq3*K`z-sQZ z3czaavk1Uy?z0BKYVNZHz-sQZ0>EnSvjBLD!*cp~cA+ryC!3yo6NN98ZeGc+>Jb^hcXjmjO3 zi;=Y@6Vp-uk}amrq2Ti4lfqe4k#o>&n-U&yIDV#|_6&Q_uCUOXjqf^*tDFU&u&)Ni{u+L%z zqOi~E1){Leat5NX&-w$9o> zsOz()0jTS$92xsOz(qka&UxRn%t@C90^;8tTgY>--bu zDwW=FG|1cLvfPVi+HH6_ZY|5diyLXkb3*9FpzrtEI5#Q`>a2ZdiyL#fN4Hb`u5uH0h-%q)&Z*RGwA@; z_L*~lYWqw%K(&2l9H81h6An;qpZVr(^RLey$j_pRx!BNO7}P}<}lXADJIrLcbF5LuBXo`NK{dug^;MCK5HRSMSYe-qKf)V z!OPKOzn~xG!`25+Gv;byw_g4sG`xhK9&sXwdpSE z`dBik*VenJMPtdJUK{YD7L6r?#==fsJMog9_u7mXRWz0i>a`^=YSCEMpk5pEq89bJ z29;ch`1HJ_o<2(_Nj+onyU)T&(w;s$50ZNNESnq#e;&#jz01X*=rs%TC(#Xt?fG{; zrfMm++pq&u+UM2-PHUh05IC)U7KAvheXd||TKg;*a9aB;A8=axEGBST`z$SRTKg*KHA;)^Q>v%7pH9_XLNKC=tcU*9`jxwP~ zNqk;6W?%sEx=RDA_~}Z&!=iO`v$yW7*_-AyP;)q3>Eu20py*NaM`u0ec2udy zX20Jl!mBYH?RZkd$t%O1-YDPpVvhR>I(o&`GBkGMBfGINbNtxxqZ^}cX95oX>A$-7 
z#k*hsLm$73M#{-M+lOC$_~c#fBPZ{!+-VtGjmKcOJI>old^f-x>{9TE*pjt1~+A8?H8C zsBBIL8EW%kUatzdTbNPjEuX*l#giZW^dCg2jGV{4+7ZOhlcWm5bEZGE3N+3 z-b&sZUNtYSZLM|Nqfybifn`Ro(r@>C|3^MPB?aBps8E8PpvUSS@W%A;2tYrmX;d@? zr9b;laPu7eY2f#5wtO?(eXM+?-D?%w`>vj#${nX({KSMvxbG#`BjL!42Lr(Xa1Lj6 z^U=mL`S$8?Yjd+d7?sIo(K5oR!|l_)Y%{@I^8x1pTi{GZ%rAfT-WPxPt@pof0%Cq~ zpF1CU*BE5}s;v(DoMj4QpP8~zIb*HDe*JO1de{%5S$<@Xd#@+`gB6IqCVM}!(eDZ;t*-E&vdKiw(%Pq}r-pkB7yIb@|!vo=C57|A4>^b4UkGPiai&#BLLxBm* zvUEQ8AAW3F>1-m=FN!9pGGCVV07yok7e9CJi(mh5{@SZ2aMn-lFV{DY$&yxe;Yq8y zB1~GR4u8fJn2W_Zrw)J4e3NI+Af8|X!=5wfO)yT`TVEWk+NM^!8N3^TvU+Co~1Zg9{%B52ip+5(vZ-PJ!UJE_z5`9QaMg z52Y}eHw=Uwp(qNDFM1RlPf?&F3Wy^VMS;Sa$S(zvMC`jWxwx$&VMi#60)^W=evMyC zxj;uQAdX<7;BhlH^^2c*-9MhPr)c(A{H_ZWHj4a4LFyCnbExosDRBf71)+8gPAvMm z;6%y=I&uMVgi{oF-ZeO;FAO~Qs*j~Om^TijJ)tQO6y^iZ<|jOqmdLvY>MY74X-{Yh z1ceP#&*mqMx)cb30)co!Qy|nAzBH<*(nv{ZL_DD=5X^O6*N(9)>~y^O!Zc{gom_^O zM${9E0-^fAq6dQc!W0N90)cu$Qy?fDE%LlZNV7$A?;%Kg!YL5^Zlx(q2!6NHQa+eF z41^uQL_y7LDF52m{&yAhhO)wivR~tq)mds7**1)X9l=DwZ=0j5KX=<*@0uP3=Uq=} zSuBgc?+;u5a*8Hj=kL(CUu%;?e~pS?0c1+Xw*V$^{eA&yVEX<7GD6{70JAIP*F;PS z;a31N`sr5yvlj4N<1+mx&k~qe_AG%3X3r9sX!b0D31`m|n0WRqfeC2O5}1hgEP)AW z&l1FAnszc9mDz9KYi{J5qhheI7o?mz{62G8fxBIpI zn!AQpYjzzob9LO5V$tIbLxsp^rvCAq{bPtbCdA3dzGmtznPv=KATBI#-ikdg%x9*! z&=VKb9opi;Jbz=+Tqv&^jXPg3#j4*6#TNJ2o8pg1Uo=i*g+vY<*E*jylSGW{L|vaY5ZN zAui5;bD(v1&2G;=`t=2~Wsuev<{2Q1!Cr~=1#!oOxCoXns>@MFna@o5LQlS+?r@5W z+<#+VVOX>zzA{C{oKb=BhEq^TY+fu!Gpd@JQF9Lg3lve*9ZX!5&8Q=9|1XANno)N5 ztVO@LNSk_^;-W^}!Nf(-W>nSg7S7n?!hB{bqvkB5B=QAuhf`b#EG`tzMhTR&%*0`8 zMcK8Y5Z;&&6m}h2fuOLvh6kD}1l+(3T&JK278JxAPC+5C;;t=61!X=nRZw~=DC&+0 zabdS@A&@K-hJ`?~NKrBOdK=-52|*F8f>PLp5^w|QdZe@IwrA53-k1;+!7_!yEeioR zq#X-QnSyv@LQvRc3W;Tf!l)4F1lEWnQm(Mg6%cnYaZzsMedfNG{_ymTJb`6}!r@tg zWRYfPH6;t;4JIgF_PKjsy!2P{&rA;rfn-rvI144fPUbUH4@yrDin@b|i=eyP_2n2T zH=mi}LQz~0cQA4B7E?rh_E+EhK#+VT670obQ{!-(G(LGWxn{8kAC7x(KW zXj5E-j0@rpZE<1wJX7rJ3+_J6G%9E}DkzY?sXMgAMSVHe-R3jL3fhz}0_F?q4sCH! 
zTaNXG`OFj-diqh+9ophz`s;o@v+ekikNo_~l(vBGE1K!Uaw%T~%oo%hOk9*hARj;S z)~}iZJfW)v?H)Ocetp5bYLM0!;no+#9ZXz=?MEp*wkojDtEO|SbB{+s+@UQl>I%DWaH7Ov7x!4f)Hs8^U^shM5Mw>CE39yD02Hf9zkSeS)e3YOr( zLcGEROF1%J`RWhugbWtD$21ouSm+~;X*bh$H&d_-BmH$ISjw$_yE1G;u8Loove#KE ziH(&jf*+e!EOX9?0iEI$DRK)I^8li{P!S|Brty?&u+TM_)gzqtX+QiP3o>_=jm|%I#1k2An{wM!> zda%eXSgejeVq-fU53eYJ`q?CQI@BHKw?+h6!RtLWpB~qBtyc8*+BZWAH zi4^V=@Jcf?y$D(HbmEHI1KM2I^@n!)&JUHI!j??*uK%nuWW#A+7K2*7tJx zo_a-VX0f`Ja#4bXK0=VnFa>28xvGN+mU0s8FW&d8#Uz;E0$<@Mm>h?t3-MK!VPry& z2^RY@Om4(QVPcWn`63LK@``O<0Xl_=lyZ}0x^zoZxO$h3c67dTtzW61%lET4iS{bkcgl(hD8LWF*G75jo}eNX$+AF zN@JKrP#Qxeg3_Q|Lc7Y6%th}i6O{(#5_)s<;$A=c^vvaW?Ra*<+%j5hj?9&#V>gc4 z+_Lb6SG)&!;pjX4eqXwb!f1OkKr z@q?Zau$uFUI0TsPbqaw1Awc|~Cj_hpcp?q~X5%e|K!6Y+esBtbbB=v;i^-g`_k8WN zdv3qn-(-&0?i;g_?)CS{x~@_2`uhYco`0WoCq4f@Df&JCKB=fY|2|1qo`0XT#uj%2 zvpYLc)w0L((bgdEWaskU`c}SPWETegW-%O^M-|6*G^;jSn&0tKEL z#4(Da#%ksUTcgQF4YRyTQKKPhh+`B-jn$aWB1H{Z6sN<5wucL4>5Mo=anx8%_AFA= zFs;axYc%8<;uytIV>Jx4NKwNys8iHvh#KM;CTePCX!PZSKRPshZd$g7W;H*wNKwOV zyQHYm5H-XxOw^QfrgoiY;VqkE`dZJz8w1QK@)S3Aaf9}ZQ{c#^8>?BpMM^i!E_@0b zyReaM2oT38jvA}+zD0@}W_Bn=jfSWpj&X{bclh2dx0ncg+fTml|DLkwdZoXKKv@oB zMg(OAl73L5A}f&KO~kNs zxVy%8I8?3nH#d&O91i9iQ#dFI2kHwZ9LoKemG}SbGzK=8c=!&7n$;Fe7So?a}?e&58c8?6eXA@|&8O~Ekt%hckom=FsOocq|edXK!6+RqlR+|oyFC5aQ!`#Wi`@(_vf(eIk z>*%W0$=t}Z12YGb+Dc*CN(OIZnVJcQa^Z0AFTL)*>Dj^e!a?DHhVN>XnNdo?U>6MH zu3%!JObs9VnvYnFXZVfh%cCMc-iOfeer+)rLX>H_%cahH26*p!~rQ7Y=c4E6-+FIE0e7HN!SInZwHVT zMv8?{u|QnG#6r1zZXEjY-^1qxtE~oXmE=1XNOwEMLa10Eu3%!pJ}-D*GAIlN?@I>a zf)ow5(ID@O3E^Nj&h5LKr0{Tq@6yoy-rXa0nI-#22cLO9qh z9DEl`RjZXlw9u8y}AQelYilr*? 
z1rrVd=U1)P4v~igsaI1t1Pce^3nm=OJ7m7$@n`>Me0HcU!P>!m<5+@D;Seeus4tjs z2$vn|R^bqNqcw$tnze($+ZZ)5;ZSCW?|I$!HwVZLR%?g2!=Xm%)s!8AWe4I5CLGG8 z^q+43ug~Jcp=$LAMdWJ-=F;VK#KG=}gZHL4@dXnO;fkf2RX9YR9f%83b_kXoh%Z!y zL)GdmXyn-;9R^jib})FGzIG^!rMEo)s}?7teeXNetzo!>{ zgP%Fr;9CT<4)HC5;HFXWErOIR{zWk74}E_H83*+L6-;U2`zx61EPRU~{fkD$dk3BD zuL{Uu;@kUoFahpe1{2}lWiTP`T?P~5-eoXB?p+2G<=$m5VeVZ9AkOJnk^ea79-HxP zD|6W}X()P^0f=*MsKWbq0OH&{u_x~jK%ATF=;Xfx5a)DW%72`52a$bamo}>fz5@{F zT*ueDtw@|N?gP@#%)Feg*0VQn<-Jk+@=m+Ao;}%X7K2gV?zKld^SmYf(La0lw3kfp z@wd|MiheF(yfnvTH^&5Z%Y^pRC*S))iz|!;#LE?qJb(~tXwk=x0c%g2Y z5HDYR>LwR_m9xT*Jt!9K(i7NAhb^cWln<)R1185G>~#ffxIB zZvz!$q)6W(JK?cCpJYNUt!#QeI{C zXJ_uLPA|p;gT?A3?!p8MGjW}QC3vt9uS^J*{--yNO%E2yZOY}U#JEA(6fSn*BG`bT zZeilZe&r&VToi_jV9qB6a$2|8u3L~^VS=Ts#D4wX{H{fE5eycqD}olL5+mK06f6pY zg?NPt78M(mR?ovOOt6sFehL-^!9u*k1WP$x`RBj$r;mnRwkV7i$@-W$CWVV_xIo>) z#EXh$i`AQH3zJ;Pka&s~+h`G7wh*r{!BXz#|H`+%{BuE*i`{d~7MrMX9HrzAM5mru z@SYfT3llGO6EBCJ{8TMyyx2WZyExNg%%wvqUV_I9eL|Rtm$DcOKV7M?8WXIN=>w{% z9J4LQpl+EEFLu)}f_s$a$x(}=l#7#J$mVg17Y*@3+@dyK>~0)hn5!+wnY|P*8sdex zMQgm+J(7#AQ~rRR`%?zmUhnUb3PC@R%MlwDL22Zqlc+R;F;Qt`?_E?HbJ{}CZ)7f7^xK&7M)2F1 z1H^*TNJZVKNH&O=IZ#P)44X)bW9URu9K$D);uu1a6vr@%q&S9BB*ihDA}Nj`6-jYe zR-t16q9n$>dMRj3?g30maadNNTWqqv!?FtZSf-@!u&hF_ppo?*mR0Dmgy`4NvrCKn zrsEvFqzAI|`IYiH$20xzW~bQB9x0mbVY}bE`AJ8+C+gJqlI*`WZR^dstvAP-YHqJg z=p|*HpR{xUwzHSlP{3A-ysoSkC|bz`V)Ib^Vy=cX*CtQID-Fx$Z? 
zSV9L2ee#S6mfhR9tXg4m3l^)VwXlgrt~O>i9aE^-g$mFqOr(@)W#gk4|KaqsA{QxE zr`Q)IQkeO$6e*!2g*b(Y6b(7Ws{UG(NJ-__+~=j_atd(@6Dj4o{{Q{#+gC&86ooGY z$vGvh>J>%`ams{9`43W2L4rR0L;^ zh+9&u*v1OdD@?GImpT5Y#*hC=$h2a075$>573Nq}TGWSL)Dx#Lk)k23*u8SHFuN+u z;nox@cCjLtR;=DQDSO3l{MmnJ(JPj#!OWW<%d`baI4sp*;cGDZ?#zVD^2ry%PUG9X zZ?Z6%g>>;#uqX%?;*|-(68fB1)#~WV!UPLBB$0weL9h_7Fu_tYWmx4GUi;4Jt9!XJ zOykQYa@8?+bw*mVgkG~yuQ0))VS#V=OarzTCKxOv;it?JKC{ppWRwPr#y1+|f`z^X zkb)(Auu!is!BS4S{L0`T1UtvL1_$!2ufo}L{J*TB7)Kw8WEJn@Q9!^hDZdZ zF-#&TjiC}jX;3bqTQGAyF=BZI7dyu)l^ro5vg?-188vD5k7y)S<72fo@O z?}%^l9MjjN7T-o-hEh`W*hLTCH%$1Ht+tPSqlQ+S!ZCm8+JKp7Nx@?mJaE@Au~U|9 z7k>N`?+IVFDIENlb`UdnlVV3l>=4&Xh@Fl9_OwOptoq+)w>S!L@89^HFHDK+SNb~& zARR2sFt4;o(lcsQq(u_k$%`aAGV&ssUXQe4$vRB_E1A+r`YXu}f&5o8a|-xXL8Brr zlkm^t4w61TbGkaaHQE{!_dU8@44S)LON;b$&-Z_1+H}O-zS7g;`f84QwvLa@`aV(Q z!+tpfJT~rYwSlMpQabyuO|yz~&MMAv5MpkxOsK;@`9nW>Zc43dw(aDC#heeZpu9y1 z7Vb1mTDjV-T;+m=dSybe{ENT!6;B2Y7OP&*!UPMmrc1#RJXnZVm|!V)7QXn;KRB&S z70oQwrOGVkYf~AfuM8tzVS=UHN;v(6M=Z*)Ic^WKtrzuGVRPIQVg0cwROSp7pi`Jg zDbva)-y1lsSdADiN)1+JI?kyE3tfXzuQ0(<9zuNW&-VU@pbM6zYK3vPPo!W89W2x< zOt7d}uvi_MTA0kjY`LXi2^}m|;uR)X$_2~szU1~x(=&_Qg2n1!J2qD2YGY=rIE9K` zr~sY9M9S_(5SQ=$n;|2`>KOZ?L<%#LlOiQ_q!6btky0MN{m*ZB%O8h~6ovJd9DR{Q zpN&MCwxOr+T76uFtns?}k0Y^cah z=b5if+x2s9*CV~c1WVZ{{`Tj;|LXL#A{Z&OE#b9u(N67?BxxoMImkmA?bj_l0Zc@-KX>d|u8CE4;VS>eepG9u(USVR9 z+a@LqmWnXjA`IvhCQ{0@@_`@y!1qi~D{_&duoEWd6w-1@krF;qh*Ks+%D3G6f#;@2 z%G+kX$gMj0?778K{%gMVd;gCqT784BE8Q9uxzHfj9Y_izBY={^2)-nRF%1zppD{yw za>AJHX*pp`Nh2qWnWvW%Mk?sJo`RoDS;hW+l*RTcH-IC!W@9)+P#Qxbg3=fk5tPQz zh@dovM+BuYL?S4SVG==U43!86iySQ|Vojcu345O*+(;_~gz zJAU)2Uz-9vb5)~6Tui;Av5bokzV2}3alzEIY17Vb(@r8Th&!0LC}-Ed@W?}dd3s#r z0z_&v|ABpqvnBDBDO=2Wumj-@CMe2XIlC7FC6dL|v$SQh__2TTEw7pqv&xGoW+9!D zMYv=^+`+^}nJj+h!>|2oQ-EiRD1l@#b)=&Vi|zZ5y>g0n9>YR_o|ra8iVC}^KzM@* z3j1Us5ENDuw9!{kOoulGMX;bC-e7{FK=PeBnZD2O+hpa_~NtnRXkK2tDz6DcTy1qJa26BK1*_Ux6re|8EN%VY|xrS|*( zyz}K#w9}m_$kHxtSJ-Y>2&^fHHzowdfBy4V{`?f!$pppJn?%a;>9;@eZLgT3o$jDu 
zCM!~=2$m^`H<+L(yHB70?mz2Jft?8oiLUI_gWY9R{LH653T!_~>=~sK-E$u$g1Cc; z3;Sh-L|jbmK$UUvr(gJcUo|CcJ!T6DN06pRs-bLaD2O|lxF}Z^fAM#!|6~gAg!0AI zvqEKD9DePCk;eryo|nofyD~~5E{HoO#6|b3-v6u9<3eD=V(MYjGA#a|Z_DHE%`$`3 zDJtxuLSV&Byuk#8eGw%P6jO6Wsa^EFS;9@Jgt9H61cHKi!zn19@G>?xq+Oi_wLi$`HQb@uRkwUgSMWiqT zQ4)nS)Ay1P!$iM?7$*EB#4zzMA%-CU2{8-f=V1c4YbI3G zU;M`(`uG$;nx3R^>{PAhffg!u%qOSV(HT3`HJW3mHg!T`@$wF-T2k}Iws|9+cZh2= z$4-6f$Xyvb-*?-Ki&L|iPfmG9XWpT%VPdD8JNmm9ANsQ3HJiETWlD#1@$wGSFiAUU zb~|a(d55@WLhP&*|MW|P$4=dwEu0w#7!GQa1`J{-PRP`B!{ zEmrI>+YTx3sK`48jniYNtl7T&6JPq-;CaW~U$=BM8?!BxVn;>n(AyLzg8MS zR7p{tYBnV`8+8p6J7wPa(cWu6Gd=HgWj6{HhR&w+7mt+%DROKh2kjXqaLUDwj-+GN zptjf;{gOsSxC2v7jpexy>%m>48S}N}8wN0+oYIBPO&RJM&9PITdfU|EZD=x+ziEN5 zVu5eaI7Vws?3CvyK6dHB4@_UA^n^=xtJtxqwc;7ztP%&MWs2=GMY?7qu3=(F%{t}y z)C)-#F7KF69wWCDJ4#}Qx`v6JvN`tmfBYjBn=-=9F@^h?!YvrGXO<$zHgeFOVFIUQ za=S?Y;lP=C*$+PF2)ls12POaZoDOY>PGF4IS#539+Lh z=_ov)CY*H0A*d8NwvmJO3==qI4Rq#retlXEB!87nUE!>=FteFYPQ5jqy*26@CU(l5 z&`({e{qB_Y10Z&m?|5209Z#Ekts_lq#5GLplq(3kf=zg_Q=fX5J$^S-+Cj|CvS~Zi zZaWn28YXti$6z;s#CC7 zv6ET_T5Gl{aSanYYHGHs)dbsO#ZDSwQyV+PHL7E0`FaIavLH<@YTFjI@Op>3Msw`c zmT$ddJ~^#-bgp-(Yc$7BefeU?d~%8%ov}k*qd9hJ%eUS!pPXVxXY5edXpWt#)&9<6 z^$ODdj@n)UagFBKsV!gZm`_f5M`_+6uF)JjRjca>7AxUqmN|LU)L~<38xJ1$kl7HcZnS&5+r_g7fNuiDFhCBF8dPb|rQ zC7ylggC3_2zwb<=a%%S0(MFNCj&6*)ooh{_M`r(pVY^kV<%8ORUzwwYwKgz{GzVMV zwf)~Q-*mcB*|629n-8wE`d51^wQBZ6(e7RD51K`{=#8>-MLy`Yd+RI3b}?w~ZUhwD zt5vIMgztFidpsGWz+pms!h}hgP3(HYV%fy%n%e~l6J~BBg-PHrAwFTkq|7GAesBF> zOwT5ATPIf6qhaGjZa<3|2u^`w7brldFp*L31$7iKWR~ViLFo+<`XfO5avqf{cX?t z<9xz|iG4PC%HtS^#f-|C-}-|;noSg{-77hB@TEjG+!WVT3yt_w2wNMm&*S*l2h9 z!~W(*yP0>gv;D1JD<8G{z2Q8^2fpt$$m0XO*^7i(zBkQ#&N=Ui^~;1h)9zVb(U7q^ zN3cL4!z7W^y|C+Eh=vUDi&MzVQOVh}uFZi0Y{bld=Ui=u&rLaI&Kv`Ei`saxI$40d zon6u~%=#-O7`p@mb&J|~u{vU~IQfO?Wu{+LH2_G-S zEl%+wx&>o(9swILJ;`0mbR=Q!2R*QUaS9p9Ef}kD>&02}Gowl=#e`2W#4Sv`ltb5l zdhvTdKV@*!Toou8FIIEc*bF1t>nGz>Y4qB5^cw0GCSJmUA; zDRKOj{syomJwb*o8Wl-#q}L-WjtPIsuOmApvR_9U0kY!AI!aa?$*Hp9NY`Fg95Wk$ 
zt>_ySQF#onh{_|xvbb~8vzLd}TK1-1|7xdbtrzC@_02&sG&isB&Q0GuHf`SOu*iOQ z{NcNSx314l*t2|YnrMSfv^ltiDG$r*LxvyxkH0yk44Z2?Wv!{*D+LP^FXnSoyy%M; z>J}zm%H`BOum1^)mkK2Fi`9&S)00LLcDzJq0sSSbtY$V;)Pi? zq<9G*FT^csCXb(w=!c$hvY=7%m;juT87{Iam8guQ&yZ;9|dOH7>m{#hCfr6fSdyi(rrc|7Y*a z1LUfz{12@}wjisrh(O^6TIqCkSfh{-2-J{-n1sbOR##QJi&R%NRn=KQFgk8HDvqM# zFyeyfxPi{#hP#6sDy}~j7u-NaN6--!9Y_7WO43c}_dQ+T_v)SVmcIW4h%L1UeMl z<|Y2ki%%_pm+t0;^Dl~i)I?erxw6VBKG@*xy6e&UFTmC=wEzlNh&Wx2|OOAX~^99u+Y6@rfTVn7sY0n z1iBZ+W*DCtbJ=7zn{25S0TynvuLw&5YcGmw@#Xj(ED37s7mv#F;!_GrKtENeQnCBbMy7# zs7eFjjpCOC2O<@vfsKx401xUyn&N;p(G&;FiKaMUPc+2=gQ6)8SQJfhz@%u312#od z955=H;%u#ozYQSF>s%Hp7?WTKaQ# zHMfv8H+$WHbWgWXJ^#wL6K`T^<3m0qmnnSw&z8A7DVrzN#>cN40iW@$Nx;X8!iW^= zo|(c2MV_uhitCWl#>cN40iR@ep91%;Nw)9-dwhpaLh$kFCfWFKD! z(@moB@t*&hbHSVc$uqb2NzC4d^*-F*r=gsMN+3O#s88AD-9qNw%vB?}o|R8bLf(LU zqApXBoBu&H=Xw(3dVIx80iT5A4Y<3xGldT-*1L^QLN-3W_N+wXA)#j>6 zG&|g5U#3x;wt{@s1jkNN*zu{RFLu` zy*lrc5bwk8eUfpubclOkDO31(b9Qx~gw}ohx&hv&n|b@^|M%Rb3E2D0L44dBn2peV z*6Mp7pJ@a5BxmnKzJ{4C@8eFFCKexWJ`oc%M;VpsAN4w$8XWv}nfK42jt0vLx zB=9m(Z3X$t3XYwmu;WurU+k=p{$fDl2A%Iez4D_TJF?@Wdpc7a7SJ22EfSo;(-!Gl zcg38xNMC+TUnDqUqCHDr1CjnLL1?5sOK{>^Tcj^SFXps2)&6Em1DU}7sLvA^Aa$9* z0;$UcCP-Z-utDlFfe})d39OL1Okjr8WyWBK{EZFzcF23PWJX5BKxPbfs1So4LS0+xJEJGpbvgjY@u5z0_{k z%B6aKZMD=Yk3IPHQ11D|m+o%)dgw;Q$@)B(Ym(ubWDJ!85?$Ta=7XQO?8QBjN#3bw zJyNLS&{-h`#CVRBxJmIT1(346l7Haah?jLo)yyn&xw4m~{6h*A5g3Zecr}08F}jlTUx(m}ZQa zP`CMI#G8N~E9XsO_9i}^08IG3iC#{ax-{076TO`9LVqKf1BC;{P$>x|C4nAAPboh0 zYCbB{bI5^-)iJKIkQUEFZj$W;IC4r!*m<~N*z;@^f1I9y77_c6C!hreE6XvrY?p*rdc&W9q z(%jq_96C9_!aIlFs5M6O%UjLy{8?K{;}dl+n^4_SZI!nNg4HeS3)Gp9bM|=NM|%W5 z^P>xVg0C2UvhEaN$Ww%dN&#DfZi4BaTi;5YwhdmiAQumJgF{A0@h0m?Ng64Br6iLS z@6w?jo9Qmj86gGTu;hvsT15-eR{NC#NaxaT1afJDmVb5)gU~QAckxvPtnK>p%$!idu@;uasnxQlRc~ z$OtLkWF09v3g z+KWqj+35se!f#U4_VU!(TU&pkrV`LT<}isF6Q54{!bGi!#d{}b4wX?{+!gw{ceTB| z|D`5CN;mF$?|bG~V~o2J2#T6Rah!za#7-vyCdEaAl^8KeV8%;LB`)EWJWPB#0hn~p 
z6TbQDC;v1?OcGc^QNzSlP?CpB__6mPPQl%$d3S4uKTDa@sPiZ@wDO43O2D_`NQVMg46mPQ5rzFm&_>_`NQic+E7+5b$>`nuR?jtr-N-{|qPT;u|HKlly zbv`9&KE7nVZeas3A3~y{^piqPHkodOAF$^L{@WVmKMZcJl33< zr3LZ7c%?ZrOAF$^Fidl1mKMa{^rJX4|2b4mVU{t7|1f8E-vO=+jy1=tgB!h9ecScY z)@p08fr~^ByaT*2zpP$rw`=86J-@nMYII)&e()Q>+{>nW{`?BVH-JMV@~DZrLP^L% ziIGBtwAO8VzlC`FSBDPj3W+Sx0b)#d&kA?XiVht>F_B8(9u1=yX}Qz zXY?prdym7bgjH0duPq!ZZXBfB92*=PHu?Z)bi1*}4`=q{DritA5Ho-VD0VtDVnoBI zkL00|z>rGCO8FePYfxkx6m9eY(C7wKt1h~kXmwZd8q|4^40sJtcynGOMz7)12SB46 zP;mtg6*LlL|mPG<0;t zONUP(fR1je>g!MX=I7$_9n@L#EJRk|=!e^OaN7qPQty~*T||c)dfeKx{~bqet6lipy?YGk$mnkEQDMP%fToyJk>jiKD#`J! zB_%n&U9yrKUkOl24oFg}tOF31sK@~gP%3gjd#=vL6VzCA1ObbnBM6uT9YMe*=m-Kv zK}Qg<3Oa&-SW9qQP+@1UE|P>u$?&UXDc7qBYO7^FBw2YU7wISh(Mg- zlDAy))&L@(b_9r4<_;qun$Wk&%q^iE%?d?FbMZ_oG)5Zpa{(qYfBm4kF*C+ErB8ii$xj=hF^=s9Tl4=i_f9 zu8`A@<*2J2#ujEPmIH}o=Z82yq#w)ql#_6RsP17~FQl&|kbQRqC50fLauQAu6~|@G z2l)a87sjz+oW6PUDJS6sQTIt?%?E*+w(~(r@j*W2B%GiGCgk+ZoG;_!d{A) zpl+w;QG5TG$Z6|m5~wrAS@S`lXUF-Vr1&78asWYmF3upKOTBgUHv8<@Q7g+7tP-(-%bgMWO^c zBKq)W2`4ClNdRA8D6q{d>K{Mq&F-WnF% zrwyI3{d%y{NUdILcWUMQnrf-tY}6W~`SUhaTlsah@oJcR!`|_SD-7=q+Y~s+4>fOh zV@t>zTjJ3V*f(}xTzuTN;s<+I(M(XHZV$^KN+57^l{BuBW`dGWKLDlewY0BZN8CEm zvarrfTZb)4lP1%4J}IP6ibXwrab)6>sDq2PAeopvu;+9xDM2pDryl{O+y2J5BomaV zdnhu95=bsMloEuJPd@;qRo+JFCBJ?Bk9u#EETTKARmzqmi|_&D%N$4?NU^8~aMVpW z>~{8^A4mpA)G@5=aRjOuj-%voZs07u>IHZd^sWKkCNn%eCh!l z4S8|hJI*@byUB>_sJmXW=Z`?=lk-Q(@kc)O0FH(|j^2Lds!#Xk$RzbmowBzD$)t(| z`d}SN97*x$2cX0^HkhE4K%ZoyDxcMJ!5tgi#iJhqC2p5wf>Hw0-zKW^C0HFwi9*Sz zAAnLf0eB#JIh2xwl3zanrEbUFpPz6l(Q#*jQUY7V)enXs6xFLiA*TVHFDjaX2VwD`y%l(G;D+Py8rf*qL2&I0sqn8*co%0wtPr81EVOqGdTV5>~z0%K(&7g#G3xxieR$c@Qf`SbJHz1M6*@UZxl{JD+h=6ba< zTFn<5`4!btt9z^2jJwR3TVu}I<9UWxnbp*$>TZe&c~gv;O2F2!n}T`Ghc<5Lu{GPR znA2i}YMROhBOpR>DN8P8sl|v-C4iA``r?MiUr*dbro{+#9yl9}fGEo`5-%e@l>kNx zo*(&5dB5kx=SQfszS&>|?5rIl@iO9531B3CKSG`3%?2YNE8-Z5ml2;z63Ga49x5A* zfDE2vBwj{*DoG?GRLfB|7y;R4$4I=4_*9ZeMyPh8Y%l^UZjO<78S$y4FGenmu4q8C z3O(nZkM?S520BNj*`% 
z6k&F2#lc%BPslH=cyC>`ORZX|o?q)#ipyg&>cmv?@};|PGb|JbuVqm4bXDSzRbm@` zNMXV;ZzIZI8fX-#H>{p;K-$sp=IPK#5E_1cBo7U0i^i=xMD1bz|+i1kN z(eUXbacB%tt=Z{E!<(n`8cFgRK79Z*x=q=y-ZXGPJR1$_hA&$(G$LDYJk$9M&S#jY z18~vZXso=E*=T5Nz^Mao>E{BJ?;RKMap6-(;;^88J`C1JwT)H-Nh0jDCfD3+q zq0%vxKrl3Ftp!I$XfkZ{0nq4PRr>dDf8fY?Tn2TvGyPj{&|~hl-tld{ed+*Q#OpGK zsP|NDn?f{9ls8XT#Yj>W!>11cjTqZP66iBj(C}F)xBq7M-!}RX(1@=sWQeL{WWa0q za_0_>Skdt51EA5(+5YOJHT(D8XlU%ghp3|&89>8#6u_YoD;hq105rOLjrhC<_a3S3 zCS1dT`|r5nLq2WDoO57Ziel+|~xjHoBM8mI- z}JPFV`LBf7t<-t$H!@QuRDHv;4r&zLVitiq}i#J+o5hbevIXFxX+4odak)NWOJ#i z8o)Mld-~=5%#)r`u7B=Q%G9oZ5M8=7B$tMaa{c?%0M>su&2rVdU-+XQX$|k$=ch$z zAej>G-3MEm|7S)yJ7@hvT7OL~fW_Z6{^KuVjDLiSpDNP6)YSGz_}Rh6$eDi5^lNGX zEdH+LUwzYWe_(3)slK+z7Va!im#FE;@?@NNa?i#kcCc>EC zrv{dMHU3oNeQJ$AXlrq+pIiMAMm|0*fW_Yp`|r8MIMq*e z*Qa*zgPuFL_{|r;PYc50Ur9AN{6k-5P@;wYu+8tfr-uW2WOOa`5gHj^F};|J@Yj8r zi%5Unw?-oUbziW%c*K_OQ)U@hjN{CPlYlt1so zHM?z}n*)1Ar{``uM z_HZBbqjMjZ+Rfdy425hNicd#Epf!H@TRzldiF)01W>`_JrWwTwcmcOV;dUrySow6+ z7gn3BEH=}@YD?8tv#C2@71CG5rlP)>GILc_mE0C8GcyRbxz1H3%vJex)E8D(u8Qh6 zx24L;4uTwpgNlPHHWdj>%|D*mRar4bwXtWDDUj`TOeKpcpNdk!6xAJ{O{PF5-!YXe zrhF<&1yfYheKwi$m1ta3GTW4F75n*AlnSO2n5D2%m~R8@m`WN`J{19`_P@isS~K@Q zhxD#eS?#B&BLdm$!e%bVD8UMWxmXk3jo;wFg_in11svRL`J<8=+AaBAQuCgVdc|NUszdf ztf=F6wp3XqD?szTgNlPHHWdj>C9C+GKu=|*FrN{0@gE!i#ipXZm>Shpn%j$#s>hyNA z2Hl2cyAzP_G?NYNnVD>0(9C25i)JPpm^3rlKAV=^!>^sd;ZI>WzpB)kYuybMFwd?Z+N+c7nkrdp~1GDnb71{5TI$5w`qFj7k+V9QZ`N0QPvF7Gz6+IE(MXG z6og+vfTry|*VL<+CYvVe{t{c5OoDPyPIN4BEXAXrLCNLBlZ}#3Kgmalx{@S= zdnM2|?)FLv+AI0=(-%r6)ukcoCK6kgOo|uY#9ehM$<-ylf&`klCL$Y}66l$1X!4mp z*ZGjF&Ig}@1ey}m`7o5goVX25zND+8DOoi66qEv*sJjue80mq-fNtBwZJTV&olil@ zr>QVk_f6i!-M%Tw`zF7F08QPaOy_OcYal6IdDLB4S@ccdItJ&PlH{9w3Ia5Bdt86^ z$#WCaJexqm_2q@GHN~VLKvUN@9r1~0mU{aplOERux+aslt3#E$EadqPAlviF_|8)dxxV%zBe}kMn2}r%R++44&>U_e7i`IFwi+N$X(Jd|EE~bVWZ4J? 
zHp@mZFj_W(fz`4R49u2|U|_dw1OvlmBRDS0O*k`%2du{0%fp-Vn z%WWg2#>NHXm3fPITMjlSFTefBZ9lKwx28wzKd^8hy6As>;E4?7i;YgJS(z+@i&p(r z#b^cimRKIhZ00Q8M zB>98^o~!kGZM53xT3>zitg(fsOGbEIA!Z|Be0nqONjwaR1O-}dwc z$`ZlCP?iYRh_XbmT$Cl+uN>ezAB!1id11ja{Oq`mb)TVCZ^qVkBgNPFv3u(v)1d+SrMw>|}X z>r=3|J_UR0Q?R!_!}hVyuzl<^Y;S#r?XAzSz4aNkw?4!6)@Rt>`V8A!pJ99Jvj|w9 zivw@=7Uj>VZf!Oy`DOJ|yY0PxTHl%L>6z$JlLhRJnk-;w)MNn*qb3WO7ByMGrl`pR z#zajPup(-*fce;{#xgJ*dcr`!q9+WP4n1MOcIXKM#zRjSupWBCfcel92JA=hO3VrQ zrIk(9R=d=y@$OPX3|Q_OV!%Sz5CfLDh8VE8HN=2rts%x}K?7f?F&p3pHw1Qr1fIYH zs>lJ(Pel$e1S)cXB~XzAOo56VU<*{_0Arvc2Ur6YIlvsK$T6}9sEnX7A5=R~kz-^J zP?bPsK7iBUw|<eSjBO54vMOn18U%z5Yw1)9?r3e38`P+;Tr zg~qawfc4j(DR2e)LV;h<7Ydw2@N2#U8wU_V8b}2JrGZpnlnkT-t7IS*m?Z-RHK?<<% z3Q~aP6{G+guOP)n`@qgC%mqYr3Q~ZrSC9gty5MaNy{QBnI1Mqt=4*%n_-lv({A-8- z20%j$umBojfC0>7m#5=64vB7r~C76~HUR^TdSFfNC{rdM4i z@OtVpfdNvN2`rGhOkjf4Wda+dE)y6bb(z2lsmlarNL^+Ob_i{|wdaZU;Oa7CutR8f zu0GEg>=4?wtIsnAJA`)h>hp}j4xzoi_B_$XUtK1Mhk~Ei8Ctl@Wg)BUrAC9f7gj|M zuyHDKKzyYl2iP(dIUtTwkppa&iX0FxsmKAgNktBbi$(+Q&lpa@rc6f=usk|~fJx91 z1Z;wiAYc@91Ocm{BM6uU9YMe@=m-LaVQX+;hw@YIG{4lll544VI@P;}-Al8s8>{Bm zRK3eaE^M~ym7YC5R-yrGR-yrYR-yrqR-yr+R-ys3R-ysLR-$9%8pHsXTA36O7MRHf zk%F0QV9?BD1B+%R8<;dR*}$gNgRk>K{@g}$bG=#_?Rv_IR<-RtT;<*I&}{VdmWtBA zdQy}IR*<4Jux1pc*{%}c>=fq){F|aQ;L;SO0WUVLNH=g?n&N=3(i8{GiKaMUPc+2= zgQ6)8SQJfhz@%u312#od955=H;%u!7Dj6!y4pj>^#o1aFR3p@!ovl?tl|jwf*;*A; z57eBUtyMu)K*iai0$_+hY&iea$x@?J+px8pD?H!p%WQSL8}4eItr{C2RvNG=igN>j zilQ`NQ52nI#^N2P+v^ZO<64Azoo}H~#iD=_P&(7AW zM6~gtXJ>0wBHH+{vqKvnMVpD9Obq_O%+R9kw}h8QYV}&XQ!97dG1^V4^HEP2FkpJZ zKv1M73>YsxVIUyV69x>Io-hy$=?McyOHUXGghFiQSg1i+Nl_ZuTq#NeMnq8>up)}m zfEiJg2JDEUG+;;+r2$K#C=HkrMQNtC1nsHx=7zRZiqcGN3EEI8&JEa-$>5y~CrDvr z{ygtV!Th?~c(rHV-a;(Uq=i`E9xTMhq+8%HEM^NlhlN<+LM+4rUt%E^IF(Hn<^qJO zCUQa0Y9bdHDigWDQklpFrpiPvuvI2pII%}+FtW_?^kQ|gaON>Ck&V)Jz>BG=?MddM^6~AHhRKInlTSx*?S$a=znLDmxn z?6ICOV2<^K0c)%$EE;3XLLE(D$4ZMPuw$hGBND>Vu+o4PQIrPEh@v!LM--(2L!u}R zSQ15Pz?3LTGqoiV1$HEF5e0UvG*erG0z17`QZTh8sEhUD;8mNG^DC;QR<}7Yzr58P 
z&!4rW)D2#HCnMElffb@A3-}r}SzvXj$pU^xO&0JnYO;WjQIiEcjG8RqUqS@NB+DRF z)e{CHRXt(Abm$2KwnI-CFdllsfc4N52F!<^FknCQgaHGhC(P1OpzFML{@TMgS7~bIToD~*Mqco$Fer2;# zt@lo`Xp01#X^RBBX^RBhX^RB>X^R9LYKsIsYKx44OR%vD{`O5kQvfnF>M}u)6g=Hk z81fFA*BTp|t#WnTyVZEw0QQU})MNo8rzQ(nH#J$ntf|QY_DoF{Fl1`7fCW>N1x%Nk zEMT*O??G6Wfw9sP2CS5xFkm|LgaO;p47`;q6!NP|oynH>{K|?_V|23QT`|A9)ht)r zZLh-D(+KMd1fNq^p!p{SHd%Luz%c6y1lCzsATZIo0)d^@6$p&=gf95NVk=7oW;^&v zvBI$5xGitC@?N{`_++D2=AO_nkqb~eP z-VBo&qdpBA!N77oJGhhZ?$@h&S<4CP9kxh+Uwiz(z(zWLU`HeUePAvl{e55+BmI3~ z{388*VB?+{eEpx0U(+0Ede_DFu1iMv>tO9i`0HThNBHYt-ADNAVAV(X>tM}C`0HTB zNBHZy>m4}mXGKU8*jPmQ^I-cB<pr2Q*9L%jCIEA^A_*6yqH@;|46M-u1{9HV|`=QQe~H4;D<5& z3+-B^I#Ozd`jM%*RH_?Fll9K7b2*!y%hZotGFkPWH)u9SN{x+ujmGz7FMC)qxA%C* zyWrOQ&g+Vki|5V%%=eGncHExNf6m^EA66ThZRh^tBX1OjaLDv$c>KhS&u}_BgblUPN$*Yc z!QpbLQQ5kny=ioo&r$!z8$R?fze;AvrdRot>_sEBPQFyBR9lhAe@f7~mulms1s6}u zHuHUd%FO(XFp}5To1@JI6OI1G7Xyp0PmGoFBh^kRKhbJ7H}v@YUq1a>LjB8wj>6jq zcf6Y!+n%|tHX|8A;1w{^szoC?@Y*kzTHee{txh|VnFs#()h%8guGX$DYc@6%;vd4I z_=j*pBtxJb?359lQpEPW&`<)#Fr0ufoEXU%=)i4C7aL0Bwffd*ln~e_c!P*=4TKgS zjWIYUs52O?Hpi=-miLZUyIgHlBhi2GL9}X>{CH&|zhgl}GKe#Seu*(JAgfjyUNd1d z3$c#@M6(w97(g_(D6n&`*BaIQXlc9}jayn8bO}r}>$Pr;Ad&?b*a}XxygskSXe7de zPko2*Xjb0Jz#o3H)t>Y=CDl!(?q%WI*FYrSA#i-w+F2&bPt?jMMKXlI>!3YW+F0Eh zjV8_ts)_dITBkhLy)bEGG~1KFfmx^OUC>l+PE_-aYD7``DePCcxII{Ev^RSj*O>+h!<--oknVs@Fz#4oV(LH=9*IPh2K{ zon9uJR?is8;sE;Tj-)=>=&@YGjf2kYd0%9_G@Dk!e~TdCw!NF)%44%@WY3>p@lj!I z{AgMu?U@=`!)l~n>QuK3mMWV{jdHb;pQx7_+wY#6+H1BKc>nt>b|svmR)-wkjt`dC z9UN``cYeE3?_6bDc}z6xJICauD~G+EQDPAXitL^P>QTYo6T=Cn-RVy{} z^{Lwybr)5@;d#z<`;Fz5skdm9zur@WX|>$+ti?pf>f_O_cgN~4wl=uFjv7yOhecNPvbK3Nna8&2#uJ5gn1bhhuo+MFssX{eE| zh0GYzbWtjSgDR*fZ&f&_IJ1DI@x8(hZAMvZ_Mxq~LJV{EpAK`jESZrl`iF2|ZP#K~ zB8Sy$jm7k}dRf$P!wMPZ>{2e%&|$#T%~S+ET}0CuQ%DOxRa(;+QrOQlRdAbqOfSDy zvSIIECSZ3V9mi3K^iz zDDTZavM1djuKwKg>fd(sjMX17p|qh{r*k&%`EPMRPo5so3ufjbf|`g)?~<#k@|Xgu zuiKT;Eof}*mWHRcKfUT1;l5Tt_0{Tpf->BI0vd%{qtj|Cf1`jY%8iZY=6baC@|w_Hy&d8eYR&@7?x*otHwX;~3CacZXMP zjMi)IG3D(kTzTo(q5 
[GIT binary patch payload for .jekyll-metadata (Bin 2682046 -> 3337217 bytes) omitted: base85-encoded delta, not human-readable]
z+<*M?#qW2{dH0;p+3pfURNW9In+GB40^C10@*H((Sq(*ql+tA4z;zf>q3^LE&=|bI z1oZTvA*z0eQhkQt=cs#Ht*j}DtP+Xx^mtL_4JoC!2c}>xC3Yd}ccP=Z7_jZON;z{XYfsvP>8sEl7li@keD%Ia6ipXHV85@IB z-QXly2f^vJ%TVXcFz1X+(}a`Kn*pg0pc=<`iF!eq#)04oI6Y=~svDjp>mWS+8TU?? z^X)|GN$Ej=QCm=+KEflw?1sSQC*ZuR;i+zTlB|R9v>x})#XQu~>^!jKAI=yXp)qLfn9DSaqWpI{nC zu4#6v91NnQC}@bPC!!?tAVl4Z`{&Cn;)mOv?ny_K{euV!8@m^shY{7rX6E5Rr^12Y z3HmVzQ}b0{^ATkz**pkOKSRCKdX~le@bymjB;-kHW`5Bl$n|P z^0VAjKuP97fchuyo?l=IDl)Ip>7Rr|Db;*e;%7TP#ZuJ%0ZdU+O<_vDIx6|%l6erK zUPaw=f99Q`vKcfXPfAmXwdZ3Q>PV*W$W~_n2u6Z3Gl{xRP?C8NoSsCz(|VCre32VE zoc>7%R7|Pl`&b48%38%$-dfi61p+Di>xKIPCPURxhLX*LAa!8&!BZmg&t+*sNNEZY z+khhULtd6HY;dX^41}cJcY{zpAtadxLFg{jKSTHJESkaxI=$1*Nkm(@N*ACm-GT9> z=O`@ENh9fkjiJd4O}eau!1T~{m~$S-8jwAj^}wWbRpXkgFnz6~d2*UK=+xNXCHml9SS1!sAk?#{cC!wFi$U>AftIo$4q%N!CGl zx)%4&k-VGVRMv{n6?;mL94)&N<>?$A0lLZ}Kp^sydE=;_^OIy9gr^@5Vcv<)$BD9& z(qk-ZuEF%RHuLN>J$N)pkA0ZzR7crKvJOJibEtQQzRsGb@mXgpVkf7SzFnhJI8ccTOhK!Rp2K{7wV9|Im#T-hCL&4Fz9X3=V7yQCP12Z+?3<)*CNfD<{v(qF^yx%) zlA;jVHvyA8qN6d;^?52$K$M^o1%wGIQ9zua5(NYbDp5eBpb`ay3Mx@Rte_GF1Pdxr z`l5yW%w$^=(O1cj8_|i<7cJuYqD5Rn_$}~2{RS_!d`om$rf*{^-4kukWjgzk{rTve zEju5oxBsn|x9XqT2P^ZBU}gTuvP~R+M^2u8I;7fVFsEO9c(uM3mO`56ISrW4TK#;q zdx_XRYamW(%zMaTsP>cin0LjXv-b}ib3mnJuI*HRZASxhlBE#lZpOX#dCuJR`k9L> zouk=fI?7xd9|p&Vg3O$~5GKzBBG1W|LYUitd#%e;KJvs!qXTN!BP?1eDQ1%f%*QV4T%kHWk*^bM9M#_LB; z?Wdalfw!V3apYD7y0V0}&P_L0m))FnBnm<9v|};ktfTkCbvz7GDzVEC-yhGg+=K zvz%lpB+LDBDn^|3Yn~?-m9;~9nmMJNxv#$i)&6#_{hUT;jt*RwGh1`MTsoxRnv>7< zBg{=jPbylYct5m6f1Ybmdj0aq<5A|~oVn&Q?Ssagy%1)A5{LncY$=4f529WhdVrOd z2bLv@5ju9mTa+HiJmVw`xiL)lP?_$515P?PZ(6zfY~^f+6+$Bd+!LtJS~u`C(K_TT z&1~Yb+0|U;wiuZC?l~u+5BA3^&4uTyGabAe4fuRuKwxT{8x`1Vgef9IE z^xEdd3sBznXS{{CIAcZ7a0Bjr=nB>ZpDcx za7T0PSF}2{_m?>OcDzNhq{(nzGF&Yzg)sMJ+-pzZUc0KSEYriBQc-#A2^e!Hu>8D7 zN1%xVZd#ipSACMxK%8VLgt(8PKAYd^v*9sLm$h-DZ{d_?CeJUzXxp37HiD2g`a(r& z#7%nhlHO`yD1@~QZJ5Ve&-3h-7b{A7+>~|&UPlk8S=-pSCp^;`D*UY{>CI^N1bo?t 
ztAU{~-X@~F#hA-Rn%kW|tEHNZDdo4dhoahJ^VPStHPM$Aw)SSQ7>KRl)&TX|OPE~DJv%%lN1lbU=vVajZ1iS$DA~@GR)N}bCRVH<}5T(SmSsLc4UXsYe(ofmSz9evckJSZBdlDLpgKH z16^9mKRRugt5fErCIDgXDb#BlS#4%z^%JKw^tcDl(HHZ0F(cSn&dg5&(#*-PJtM>& zjK1=*CnIh|t9IhleoFkQL(m-8!HJvN=+r)Fb*22X)h5T)XO5FCg%EenyHT%wAFs?h z%Di@jj=D0YbU15%BdYyBICE`7fo9Ge&I(ADvt3r$&q0{G8~56;aptmvPOsI%+!Upo zq@P%bGMDDeP3v%KAGADYFN9!8ljrI)&&ifTm^%tR9jU-^e0A?9^$&; zy(n?3*_r>yRzl+FPa>FixO_=+H82#y+hM0-K6^gv9clv+dzMHQt55i(_2yPZ|2(1 zF4Nv$;^_BQTcjsL4RT(PtA(Wy=AJ~oHp8++Lx+CiVk-M_FCK>yw}=xLA9QLTIO3$& z3k`8~N}OaVgt(2fP_O+s_uAQI(>^^}PU+RVMRPIjwOq5&5ILon)C_TTN}OaVgt#A| zK5NDKq;h4WGf&vW(T`1(lH6l&$AGh5;^ATuVUnZo52VOxi1R{R4Ge_J>$PN|RUe-8%SF{~mU8ED*06Bm#vrDP}sxAEwx&)Spcx25`VQz|Gkr(n2QzvkNl zH?%urg}+T4eb|)h6qDcTGrviOLV$Z3_u1z-aJe#{wF^oOgG!~1xL?dei5ud?UDZfO zoQ5XO?8Nz!<7#0kgt=|=QLjCQ*OU{7oL;MkIi=$!jWbd0zsQ-pwk%@!o8>fE(i}Gl z3UiXB5awP&y|$gTLXk3`)kB=p{&(+el(>Cajt(z#hKs-fXEwlt0-R(i1h{8$r)^?M zEu^6o564(D7-5l>gHCPjqiJnbXuBqr?&=%pPUa`~Cknmm5}ViL z2eR2rt`MFzVnJIIedh*DCh3Jr7rS(PsdR%=hfwLLu@0fYeJgsKN~0VdUn)f%9YUoZ zT8B`e-J>sf<==nN^4<~18l%6~@+Lm6M%q7!ZN&BCo3sSj!A$n@AJDH0v&h(_ZCMP#1b6u&tJ-kb9 zUK3L~>$q^*L#OODE0TKs%io+ex~*yS{CUfZD>rM;ROr(?HypwTIOmD}j_e1ttb>&2 z%}-`j|C1RtF%>ejzX$bN>+Ni4A2~wYnbrZF(s{?|i5PU&B)&C$L2IzF1Jbp@u0FAo zOod?gUEFUk;eI>R?DSiEvrGeaaivpOp)`u!9KL(Ds%-ZxaM{jY36t%7_-z0U*-kbU zg54om)Nl9Y*lq5p0d`6Uv7!r6^N45MV|s+TYjC$cgt=|x zs9I2`bQmkT4y8`dJFYq+C@`Ec*wrU?lBp2vK8O456&$;nS^dc_rgXRFn0Y96ovg+T zC!E>`F4@@&VM@$kl$f%q5bCbQ-S%lt-GW*qJEeij1wANr=kW^pverOj2kp(8WLKZb zPIaL23huX~dA3;UblVYH<|#3yc}i?PMqTJP?6N{r*$_5(xz3DJd^zy1p&77!Rwzr5 zTz3WPw$@j9ve?n6pE~<2m%<%@A1=bEJDWi_X3(j8;Gl!fw3$p-pP5cN;)6i9unqIt z3CwFp)&RMf(kx~0Bushhw=7LZ3T5gC4mx`!gz#yeOz`q#f{silnF@jKz*f|6XL7$? zSyrKq&{AL8XD<~p-MQ;=^?6eoYYuj>kJc*6&}Ws_rDF(yt5u7vJRH&Z)bqENJ*! 
zV|qF_UhP7d64Qr`gIZV$VXk`vnk#hqii~_MsaYa{C1vIpF4|L9Q;5lPrZG zchhDJxzM9Koi;pPKX7rCZZ5U})7`pAVG(yilozfU(k4{wm zJnu+5%c4f$vK{makf|}NuQ4@5NZC{fc7MVB_OCo$w3St9ny^#aF*tlNirsdGU3hJv zYs@(Cs*a(qKB<#Tg-~||?zVs5Mcts&Z#7Y;RAc@LJ;)Yg^UG1KPUVA_>ZG#8RG4*I zVao3?BG7fb2g?TtTPCgzmJDc5#k+FQTpA!E^yQ~lskC-vxtx_YHfG8IDI zD@oLEKg;|!JlE;AdZ>#j)tA@cUFr9j_K^k6PW1yvoxKv~NQw_fQZ%%6vZ)a2{*C(W zVwUP64NkAs1f9|>C4L+lGVbTl&CNR14<2;VQ3sRi>NM5GBvS#_YnM! zr1kk-e%tEwTN}F#^asI|ZnE~Bk5YFkAHZHrNS%iBMbh<11DzM>YGEmWxg|DpFX3MM zQ#JsMXveH_>S@q;yZqAXlfzNtQy8JM3)KX}2+_jSPv% zY1kmQ-`ZEW+OxfYs^7q~^pR!i`wN|hJ^}jG0kdaNmwN`XsSxb`jr;9oJkxa#Ig^E+ z>Qafi%~LSyPGAk`9vx2I{bf!=qLW?$Fo~{C6P;u!gt#A}K5Gr~M0Z)4*Xkio={>NM zw_wB}2et!8obv7!t&Pq+p@%r7w;KL-4n~}{h-c{4txoL& z$DDbp!I!5RG}I%Kr4Z&;o{M_zMBWcgESvk;%;{+7lwN8W$YbhRv$I)_R;%H^0H@w>EX1Bzpx4LzCMtz$mlkadqQmsqN^tCT~9&)}JW! zu1joQlOM=tGr2ZK9NXw@c60)+S0pKlz};rD_wUWU^}0B0QrG!Yr(GR z0pXCYm+Irq!FeAJ&g-%cGK#xx9p;^(Yxo3NoPQJHq_nep#{jDEBTV4~%MOD0yUj#o zFQ6SUXO^lzv!u&92u*u$#GKQ*g>QoPh=Xk+oRoHuMy*9PUdK0`FDaXC2ZAS{xH3G| z4NsDF5S|X}LA~>Q&Qn)ew6`~kx=9Kir7M$nZ9)|uWD1A7of}*J($qur>88~4q14l4 z9E7F)HlwcjEOX6Bt}N1%v7~epayzPU=qs$!8{Zr>mVha{NleuzCK{F`;~*@(in^xt z0`KV;W}UuC#*)$l8jo&4;s4<-h06-lAS5PWDr;b>7nmgDATT|57V4YN^A_ZKJuoS~ z0CfZ2<(A$Y5EcG`t z5si$b=x6R{c)6dU%Q^^7`{L8)|6@JK;bmnL9ZpKQ>DV-i6ZQ;8AaIi6pTVh)aFVQp z;B;dK^Ulx}45x9La8jD*{b&QGux0U5Z^EFn(F+7lz~LQJ>s4Rt5oIUIItWh59Oj*& zKQo*T6B{R@VozzLHzAKHY<-u{AU6aLCuweEvQr&pC&@YpPFLdI`C%69!?T^S-Y)iN zHk*_lEqm-dl%|7t>$Iiasj$DviDn239LF=csruZc$v6m02c3`l=F2?N4|F(vlME)M z*U0kcpbGz-!*oMgZ5jlafRnrirh0)%G7bXM)3|T`fhVRmEi5U$cl6@~%F?48{%oUD z;UKUC+(I)f)eB3KaS)a+>q33=MaEKOQLEE8$r6*&f!DLoMiu@ehpAB3Wielk4lgfd#D5fGdrTa5$WncTN9BaAT|(50)hjPBp^BvNdm$H zkt85K5J{2(#9QF%w5OM`rbXvw()mIzk+N@2FH00s`L&6zLMoGvw&yaPeaZg3mz&g) zk3Wdrq{ePA_*=bi@=|VjSPOI8#)sQBH8K`b-EX-K_1$Oq6elOv_j=${8bf{jF6;xQ zvhsh_ptC&*B6?DQH~aWr_VM+gCm9Pt@958Bo*R0Qx$c2AAu?@A zmIs|Z=+Q1S#HxOXrG-7&SO|MR#C`Wu%y+}R&a`1qwe(~?rGfFi_n_>}Wco)|4mm#{ zh zSMnrdA>^(3JVu^%G2dpKQC6{OV^8TY+$DEoAJEQ1N2Dxt1Q9)83UAP>SM(%fA?Uq~ 
zdaiXGKgM}shcj(xLr{4AJr{eCL>GBV z_W}=n5GC(hEa^qAYX~@bz?Qg4detXAE#yhILdd%i_gw4_o-Xo~o?i)l0VD5p*6EGz z2sn4Z0TIJpopLAH3gPa+FJgXcy~;-{?Pax^F7A}xY54~}A`0_bbID-9$pa3y8S?6s zJjqrFd5_$SlGn~X_cW0_J$rgeDesOu(GNI^hmMIIbm(c>qK!$B)SP9h-?J=|u@Lz7 zK~G^>Q+cCwVOgVOr#(IJDdoK{d;){-5LRzmf!|q>La9Noe$kVRg`l_icFc1v?12bv z=qZgiKlds01HQ&%$Bx#Z=RM$xgQ+>IuQ~M;DUz`e`2KhY=DVR+S>8CjtYp*1p3;Ex zhEHSqTTk<%IoB9;_N1wqNqqI2_#|Ti>@9oo;KBEG{@`nS&5ER6|ME9yjc#iiJ%8Tv z@T?IFX6>1>?&f)8d$ZGbZTPfo>nR;1yWqv_4B z7^($$N)NCN-Gu7Di0i+)EN}$yBnxms-*kG_cY0b~u3nm)SGPG&vK2z!ShQHPmh+r(T1PE%p3+sl>mNhOyPp@!bF)tU z1DNxq%U_1Px+PDt6++%WanE%*%i`%7MTN&m6mF#d{i`Ty(|D*@Sr#hZ?dxN3wJBNJ znzYE3Hl#Gkm1gmp(6@cTIAogse%f)ck6aD}_N#a)F7co=q+3zD!$_9>G$R zrAsgnvh;j05VQ0M27;Cz!9di~BNzxok>+%+O;IPljrMuXa z;g<{(ovFT5zL4sQE=?x#nRF`M;t&3?#+Gm zh_Wp?a>$g9jGp-b3Yo<>IF_^q6*6GHV9LdMD;FgrA&`9+_tkmKSHr8Fu1d{VT$gP?A&}83RyQ-d}@bN z@c?DA7_gmfAgedXBqITk73Utm$9;7R_toiQOSYbJgGlM}`W260jD^l)^ZR`X5lhpq zfTg@JOLX zq$Nx#hh6nqOlvLgvD5M;g-63z{qZFk3E}JCsHa-fcrLiGQ72+?rAfynU&0Uz{g;gr zCzR$zFl;jK5nm6D&CJDN?FB&!`61E;z43eszXe% zSa-!@Trv{E*myi2JkEV}M_E3ghAgI($i8$pmdHX^uwrpsSpp6aWA+l5VzJ(fMcGIQ zV^i3$pw4b~Z~#Eo>6>FqCLqIr|TP1qG~|1D0rX>K!1!q!Zi*u(|_GvJnE09>2n(+YC}6;AiDpl%`aLny6eigS0qmEk zqlUi4W{Hi>PDi!*I>YBHQVQo0@BA7H)HL2L#+-T{=IcA;q{fo!jkYFIdD31Eu{>b& zf%N;NAs_MgNe2~) zlR70jC%Gk)?vBpwOXTyZu0&sSSu&C9>hN##nJJm3 zDVe&P3Q-Ut?0(!+Kggy}k(p(lN==y3VTB7$M+w`&Bg3T)0VND*7Yt!_MwnzHgs|VE zo;r(-q4#KY`YAPEO78()x(wwj#WPqU>(o0y8BBU<&TOP=+elT-V3LgxzHV5Kdg{U4 zQnB|Sq53d zZb_h8e~)*K3!0s32Zt_TH_FgeUvx<(0_Z9}v-$ot?|Kq>X7fnyrxVMHEMmNrZpCjt z1I24O3ks3bI)aH8u&ZV8sxQ1G69IS?AKCmZ?x%b6B5Tu-(@$-@NH@Tg($|vHP`)BO zEF3|YzDPG$q~K_pFE7oPUK=5V-G_VXbe;)TW_1##l))NTqJ$mG2nz=lFeyA5!0HY# z$wmNR#R~E7+pm5Kt`I{nvGM$2jXL>KnjSW08J*zRoLf5juFmA}Hw!ae&QsbEcV(qT#DsCdhBpU&M z6`Qcnp^j?p$4kUzWvPI8#-a3_%Oh(szO3uG+H1?y_7^b9=`U~?+9a^*6PQ*L0eBS) z#H&}o^j#!>MYx|XDN6*zcqyHay1X03YdbFxuj-)7UzE0!j^3CC%u55N)kFYZ#q9ML z+)p3letNA=yp&#YNi9Y3(sTTf(ssa=I+MNXD|<;MLh$-E?x)xA?yI$|$|BBQO6TXV 
z>B8^|eSV|nu0oOSvsFGyNIwA;N#j?P-c7d+b{XN#I@PNy0Jt$6R@f?WaSh zMNdhrPo;aJ%ToQ>zGQS>vMZHOWzu;sPg89A9r_sFmv}8Tuhm)e`P-U!d(lOG4a~hL zAKs?WY$T*Tc@6i~H@L4ZX>j_g-Tl)-rZiof_%{@?!?@xT%M=e1WI(Dg6M*U`0JM-v zMgkx!j@Zt9*B^86i0u$I0ozWZo2g=xqjIr=VK}8H0u^Ghb4Kc|`2x8Bpt~!wsHnPL%snmqo z&&4X#iTnQ(C9IR3E(|A}dIuXs=ish-(XN)Jtl*+%ET(i6_L~30 z7z_P?#fC#$1In1a1ZJdI_ajBgNC;zlp@pJVVCf*zIq39Na>kVA9q0WPW$YJ>v8KQ> z25gR+JXU9UO#bB>6bFu4;XoH8qjz8U@BYJz5b{+!s>1@_W>OFvP4wd2Co-HVgH) zR`j&rs=fp!jrowq^x6m^>|)$gAK;#vD$53T79%B0>B{d}uc3sk;d*zJRk#5n3^*ZT zl34XgjFd3RMhIa)#67i-d1|;*X9`n#cjH0IM^=BpV@s z9rHibQ-96?i{#6SE$S4e^zPEo%b4DwJ6IpK|By2<1PCzT45tCC&H$5agaCHVE108N zhwv_J%b?R!ZNNxRcEpr&*q8o{Dn7&&zlt!2(VokZ0;IWG@8xQ}W+MTR6%S7>yz;mW z@Zl-zK%T{xb!df5DUE&SB^0vlyijaybt>*}9;3CN6eJB~^#_?`BmlDF@u>%YkNWBy z4%w~A6&Sdrvc`EEZPyRQm_}@5WZDr9RK#)N{W@@U%>Z?W4NGKZ#f$VD3SKC-y z6V{8EQXUKc3q`DnAr_wK3=KgdOp1^ujn!Qmi%UiV2rJf#@4Nem0i;&^8c$32=l&7()z>&=8=IX8fx1?--x*b? z6;J;Ys`$NJ@l=CT@c?Bq`K?ux$?7hX$wmSoD`v94J$rZ`lF3fuzPhU2>8mzmq}$I* zr|mBv@xUpF-bb@yF*30%1N+Mu>DH!nklP^U1u?xg0thRnvHwFoHS`JAj_t1%Fr|aQ z55A5jv2XLhu%W{l7y<;CbjH*GR(F6&HbMX!^9JgvS)K}}m8Ame6sB~YrSmta-Zyi- zXSW8FFyPH1Gf}L5qDY#;BpV@weGT{2)A-b2Yu4$h)PyO$-SG|lV9IEg!@^6QY6mEV zNsmI96jpaBOtKLI*d(;rI*65sR-@BTsqs>JRp5Fw@%A#U_Z)I+9U!{QlLEdxJWZOu zBpV@gy^i{+F z$(;S3Ub6U_-Wq8yhgcr4_dvRy((!lV?*k6Ik$#_)JBhzfYB!1HNzsc~9xzj+-RA)= zg3<~EA`GoSAjHrL1Y!)WKp@D_3Iw7Itw12m&}gjs9*ZtbrqXLOxvpe?GF^x+P9}0``-x{SFFG%{8fyToz1hGj zvb3xNsDqwbeF@Cz93M{Sc&fTR+X(5Y|B8F+>)cZ}b~t-guQMxE)s@~}d0+@5td8&e zsjq+mtlj~Wx*`Oy&t8Q&sx^`WwxnGEY@L^(m7;qo9WWhp1E%&AHo+Q|bt>&|{_^yr zHp<^`=<+^i;eQvAMPwnhQ3768r(j6bh;IcOHn9$YWR696$0iy#Frulh+O9q{ON{pA%ZT0V>YQK}^E-RpL0b?}-SAD@HnFxVv z!HpwGc%U(@oYWs^9=~PNGo;mBL?W~((cN~*!gaGys>ZsPZUHPlbQ%88b zaiprc(q!YtYcasA2|R&KY8guuvT#|_ZxQdr$Z@C5a(=&G^fkvmF66Y?Z zt9&2+5UTb;9IvASn!A8wl?Jc+!b>s{g4aKBKMixdHnnQROX(ip|L~FLxqK$mm368e zoa_Z0Ks9*P7haNy5WMcb2~7ho7mX-8FQt2Yi*H5M*7KGMWpN0cel~d37haNy5WMcg z{d5#h14}xbX}~THPxtpol)^Q>n?8o}bQITfMWa*C1AV>6N6C{^YP2;`ij$_wwBn=@ 
z8?89$Kn1NhX^WXwoRl_c#YsITtvD%a(TW2mh@``2>71&^>xrs+FhE=({XQVDkjj%H z%Uj?=%~CI^cfmkmAeU^M&+l?8WZc5V4ouhL(OlI^UGuF~#2#FJp#eCTM5GQfw zkdPQPlBjfJ>tXyW#`3Q?vfcR={^muEmIa1cCM{N<7HerSgvG`v%Hl2Di)WX4vEHnx zbiL~n_*Wdvzv8H}uLvxQz@*o(Sl28{7DHG(d=kcDADeVFG&}Q&UJ{k=s@#1v_6^n# z_%W30%c|_ak_gP14T*J4qGU0I#A#@kWxbu}lOu)%7Iltpl+xncC!#REZP#ZEIzPi- z7I^(LvZ{Q?#B!HA*j`2rMl76=x}PZJ(uZBbf)@J`9i9zxV4zi3~xK0x-prH zE>HC*z3ef*`$-hVX)L0IV{{~X8)kV~3^PRaVTh_Tl_(7#Cmx0Q?>gQpHqg1R&LY9g z9)0<_j+&YbNhI4)FSb^)C>)v6==5W|uJaZLs!b9Q=BRJsfOTnzgNYXL`bby+|OEQ-LHXoF&1qbOMn zLGg?@>cv-aFHUBi2}Li9N*AXao6xVAz^5I9?ar?VEQ`Picf(>`vnW{%Vex{)P%qxW zSzKF|QS`E?bj$kv&Dd9jZeso7eq{kAuqXmIlMIS=jiO{R1jRXcarZPwF;_zrm2PDJ z3s3If;AwG3$D1N80{7?)i*?PSWHE%r&K4}8SZg_piB_i<+iB6$a|p^;jTA1E{bD+* z_RC!Dfin2s>Ems(wkD0E8mYq3DqY%3(kflLk*QTWFtOHXciP!XBwNA=SWdfp^M0lXDoF#ZGXvrq6Gu|rI7o_?;pngpkGZ~S|oLV0@+ zpM0z)%x^jB{3cZ?=D@L+1IPO9gwPhh1M}F>BP_ceC}uZu*pyCZMn8?IZavGB+k&z{ z;cs#yHzLsSncP-=Zqsik1h&7R9vk`wgKd17!;WZcB8N@slG~kkqOcvyU<)_X6_VsO zl*St-vDIZ_lk9}B)_51{uop3hjm*e8Jyr*7N>|uk{3ND(Xd8M*fMb^>54&Pbk`-g*-}BeBNwzGs9KZN>mb<4RXDr<+j}s!#;~XmYILe<|5A5b z6{@aDrMvnDx|8|I{fR>Fy2R!+`GIUUlPiR0jabmuL_X%1DiIAbr74;QnZQ0h`Pu^8 zmE1niQM|`%%=3@ z8PNr)T)q%JHP>ywMYlN9lj`#F7~LO7U_WtYeVK0v&LfMYC(q$b4r-`Fc z@4S`=`_^`6BSXehTxlEn$pcZ1pX9^(4XsX%gTRyh@g_ahZF-WegYfi`@u+t`&%HBW z7VYVHQo8!m{0>y(9?Ab0`>E2iG7mwGQQSqI^1+5xC{ZsXoLq=%=N(k9k7_rx>~ zJp<9cA80R_w);h7rF&1Az4`Z=Q0kLF38(c$1#$H$6$#L3moc zHyZ6n^5$v6kkdQKc#12X#Qizm?)7kurpcKzSPC`ROW6 z&6CnF;v4vQ{|>J4HCbl_2m((rsoga^)eldybr7DK(bhi4x~GP+SWlOoltvNX!iOh+ z;n}GzaMfPi3?u5fdy=e!&@>s}7>)8?WJOu5C*vumwDolD8xNj>+=f1h^AszK0D;I( z(&mQAPIZ)>B$}nV^dme1tSyTGLEy>$cr%Tt+i8Ss z9fYTk;@*k<8Xj4GYEgR2X|E=fr!zQD9f7O&?2k7*)h|zybr7EVaqryBd;3cVoxQ!y zQ#Z-sYNba%fBa4qr%4>A6=e$ho1S`zzTME|15KKYgTQpcL6~obE@!idv1P7F#*)&T zo?kp1)7SbUXDLP4I?-R!zj=vf+ui{;1{)V3Lf3 zz%=xBEX-SZo|mRKIDM0>%!?@{rj=n-Gx^0_e>mWS+7x&Ju zah~Rvc_$rDO4;dK7Ru8*xyDD8X&eZiqzGtgJukH$P2Uugtb_10eGmQa5A$9Uq*X0y;EGW z4uaDh{MCiScXc9qaEd8adw<2dz3=k?a9u~xc(Om<^iFl#JIU5TczR+K+KX)CJaw0O 
z=Lo73Mlq#X#OZsW8e_Z8LC8<`#~YsNmnX?O2v1){y)*P3){8Wj4a#hsX!dcHUR}Ah z8Kvnu&i~r7>50EI(QFk;&w7~N$xH8~$v6m0`55M#3m8gaJt!$X5b?{SF?DC}R5u)^ ztMX_}Wgds{rD@V+9E77gkHH+%`Xy`VBa0fGzDZW&wJ5!)5J1`{iTVfoRG|AVDbW!CgT7s zEqn3c!SfsU`wH?7&^gRE!!xw7q*UqccO=HX^$MRq3=md&H0CmeULE&Ml5qf*me?$< zItukoi?I}$P!{LO5|i4addFf4PiGT|J)GzVAuCBwwwbI{KUqmK4nop_lQ74$?&eXx zw=BxrBoWoDyPUTH)7| zJqkbResUZG{y0X@9?edACj0u6?leUm&>+zWlFpX3HPHwH&T!BOlD1E&zDPPpL?uWn zBB=xcTQD>>l9D;q7Xh2IGy&CqI#-&v6G;MM1Cb;kI1ot!q63j6AUqIB0^$RaBp^T# zNdh7Skt85Q5J}P&BjlgYY-^(VCi`bwG)1GhwiqGbDj{1vZ81VVi%j-S+G2$Kqn2df zq%B6sKO{}{O)c8$h!$-zLVjzSZ1uF&5zsSzG_f3d_nAnN6eC)#`yXm+np;ITzb2LL z>Ko`z<|p?i6Wx<~3;liJStAy-H7)e?NAjueWM?85cl@C?Pk*R0XRFhd8|d%kf2!4b z`hm8lGpqP>{fXTA?#!n2B1# z{QTW}^4ms0v1!n$Jvx$E5Okiz{So_ZW&(7SZrd(>3PopMj?R&-!9fR@uo!e|kB(#( z1f9>}{&+dBf`&Sr{z!n1(z_4)e-}e1^mDe$)Ks?1RNqMlnA#X}YEO=27KEIKQFpX{ z$|LzgYI2mWDqa5!N{*Ii#r-bVfC;!Er}pGXW46$t(yu??5w7=xP?p$5EGRN@u#yeikL?XPlgc0ZKZ+{-8-ZwNE;d zSrBr5hWq0mm_LTwoHdZ0b57m;oehOUr^}v0QE6eSMVhlt-NyQQ?9}gQCKUp0O@51z znp^)x0NcNQe}yy`_WvuS`I-MBq~ztl2&p6UUxXA$-U9p8XL*Tlu~@XdFOg2C(mm1H z>F((4u0ru2+tShHy~*g(WTG#+DwFH$F5k*ldvvh%$osLq?7#9G0s{@sQ0Q;HeXiCO zG5g#;ysEL$lgXw=+jE)DL}#iml`o{aihi5V4Eyw{8G9PaA2i|@h9Iy9r%3i zz+FRh4!lL%Y8dAYj`P~5g%t|Xu|Gzg7khBY-{`B!d7-n@rkAU}m(vw_l?DzEJ%u{& zWFC1}ya^(&RH&K}o|h4xF6UKx&eVDab>1tu^Cp{vUp4^uV@%|&-^i=9EC2QHqt3gM z@8@i74ZiaNQwZa{)jRKeFD=S&x6zqMub6bneJp6a^{V4&)|3?P>I zE(54keU||g6kWbE2;8IdUkE_D|3U!N{TBkT?!OQKcK?L{wEHgvz}WAQXm1* z3k8x9y-*-A73hNlkfbz71rnA9sX$QDAQgy88l(bYNrO}%E@_Yo1SSnqfyksmDiE49 zNUce1f_iKEZ-rj5(;&4bu?c$JPQ$m>BsM{>-f8&On#3mPH9QU9T9eoWy^=@&tZX>O#HN;-)J;=>p03|ZCpjfL zC%GkK-`tqnm&oT+UG{~I(u}F%fQeEZP$^N01HurcIG{G76bA$zN^w9nL@5pkHI(9j z7(*!z2rg+#%z>~%Di0`jNaX>6g;X98SxDspp@mc)5L-y)0l|e-9uQqT?Hf&v&hECa zlI0V*lzN6Gk_0e7Bnj|-B1wSX6G;L*o=6hl>qL_Dc{z|iiM|QQoGCAvQ-NF{X;O&- z@+FlhAWTq+0^$UfC?HT!i2@=8l_(%oP>BL!1(hfuSWt=57cHP252CMvwmGOo>5CT7 zh6dGF0SJ4#DmX1VHM$6fx=XeU?61a5e&pEJ%WLtrAIIj zwe$!E!j>MvK-|(J7zkW?1lKBZL1l5bmykITu56d)!M 
zNC5&6ffOLx5J&++3xO0Mju1!zf(C&UAVLsG0m4C&fJ&fjp%4SmPay`tpF#}4KZO_| z08oelA^?RLAOui|0b&4!7$68xh|v=Tpb~)KYoMtZg%~|i0GftTe2tzc03G0^_!>P? z06L;e@ils)0CWhK;A@~$w-jRZ)B@1CT8gjHQwu;RXDPl0s0BP-6rT}Ykjmu?(NlBX z_A|c2ePP9RkVX)Y3}^%ajzA*_NC-57v^fKi6llH(C=O`^0f~V|5Ktg`I@VDcSfEoR zk_03OB1u4SAd&<`2O>#8cp#Dl#0MftK!6~U1VjiTNkE7olB6w0Kz$_5H$kl=ktA&~ z0%|CUzDZk*fZ9r;Z_*Ycpyrb3o3zCUsKuoDCaB9KlBBJUfO<`$Z_-vrKpiL1H)*RQ ze7lo5(|%8IZ8DckcO{FzxKw?w>**$Fby9)6r9mp-pBkhB`AmaUz;8831@fE*sX$$< zK`M~{G)M*N6Rs6+uljY zf)|x2psZTwrTJ4q80h%OWC5XxOcoF@$YcQ#gG?3>GRR~BF@sDN5H!eS0a1fY77#Yh z@x*d0%93z&w*6w;Z2JbW{qq3!*viO_jebH5!Twiy^46RW*z@SFy0F@f01DtA< z4v?!+I>53<=>XjtrPr2spmI1@W3&N^37z7B`a-98Al&H`55zm2;(>stQ#=syGM+p? z9z8jo+0>Wp?kT3#Y%ZC%f2qxWv_s+NSZM_U?nNsQa41@VfD6$I)b2DuW~BW>AQ{pM z1o9uPKp^F1X~74w9I-?ou@OrIf(@}mAleX11i}rmL?GS}O9TQAu|y!^5K9C?4zWb- zF$dc6r2R%{x06_+_Lu|haT0%{_Lu|hY!ZK?_Lu|hXA*y-_Lu|hV$yyiw1MfZlxvAD z8AzlHskK{*Bg+-`iPc=eehe{H*ivy?Nh=VDIJ5$RB8OHW5N~J&0woTuKp@)C3Iqxq zT7f{Up%v&`d7})zZ!MLwMBiE}Wr;wr@fN_9B?8fgSRxQ^h$RB?hFBsHaEK)W5r)o+Aa zD%!Z(0@OL4elsaHW%#xFL}#iml`o{aibqcJnY7)|RqWwNu{? zo+6b8L^e`+KrKZo4~S}{@_;(ZTeGOjJJ7n(3Irk=tw12e&Kp@c23e+8GpoWd~3!#3ER-o=k19fY(UkF5+&7MwcOvTF4=&ANkZAO=; z`jZt4Q%#Zqt7?)ABoa-MYiDO5wP^ZgAjfEu3?v&(l7XzFNivXrwrGkZK*_61G*JBN z5)FhgU7~?Frb{#s$aIMYBAG7HKq%8C8i-}OL<7N0m+0C>GpMZgHldgnJteWec*0{@ zsz2M8jLu7TrSkT<`O3e-L@E!6Tcq-UU_~kqh)|^RfUra=4~Rvi@_+zDDi4S{r1F4J z;O$fbZr|GOZoSP|DhnsnMSaD28Di2stQh7k+CY1*SZc=$b+$NO= zgl$rJK-4Cc2Lx?WdA`MLRo;E84pjyERvoGe1VW6rtfVRsh%vMRfgnRG5Qs9g0)a3? 
zD-eh?v;u)ZLn}~sr17shROHaV>QGgn?nnbwhonvu*Bxn~v%dYFwwY!`7bg?B;(^KN zf?TFQI)8JbSO-@gj#7#P9EMUHkeDdN0nS1x4oFIr;((MyDGo?Tl;VJNL@5qPM&8P7 zMa+S!msB25^peU00t=};AhM9k140X_JRr7^$^(K6sXQRMkjewX3#mNK@dc`NDZdXY zbxGxEjxSK5OZt79;|o;gl764&_yS#LBmF+j@ddidM)`fvMK)4-n(G(n*Az*=52#=I zJhhoK9nV+Vk3+Mbv*IQTp)?>?6H3#&l0cj${5GHpB$NikXhLZ~1xP3jh|ivW)i2)6 z5fd&XoRaBI_EnzlXpjoy*88UO7`22XY&YIu?V3Q2Sh7Idv{euVXxlC6wpI3R+g_yCjU+#ev``tf;hQMAd-{I1j0GFOdzJ` z$zuW#*y)7=(cRM>y?Bcp{T5_$QTudq|3Es`rS^+2x_OdN+8<-V-`;r$A9=z?wyH43_RwVWMm%lk{bX(Kt`SX?+mvz>j zDQoS>S$n0zv-WNZU((jJXLmB2FaCd?KBBFuwTh3-+jm^Mde_(~FfnPJ{re!rzmDiK z(C{#(%R;70*T6`sCN|#T@5yW~^{hSfn`~)NAQVq`S9F z%5QYj=CyC2+C0iv>srxu7I;u+N@uDNO>}oBa}`v2_6by_2zR3+nuQgrCYCV+&Rf1X zn&|7vdQ+G$jS?f&DZ8PZty#&O74&NP8>ZL$m|mkr zpW2;8%XWA7DHAyDADD9172L~`&4NuAds()=Y;PjknJgrt*<2>GwnD%<@?})9$GKuz z(VnMyQ7qjZwNH8U*>khl>j})vmPeLn@xocUV|y6vK*_d~Q$H z3--(1_Oj)YnF`RJISNzC`Ws)b0dc*&B+$-Ws**A%?|~}wBuC&n0fCdf*lr-#)0eRQ zODP@vGmNTrJP#Q$F=Tit4!aUL+iZzkAz#64WA;Q<`T?mu5J~S^O8Tia$B13ghQd+k!HlT`K!4(@C64vch4@T{_@AKGEHM%a7vM=_i*mT-hOt*tr z=(5BhvBHC9$-Y#7Dm^@Uk0$c#l{3>b<1qcK5AZOSY!KFNi5Csi1O3U|P7M{Crtj~M zYUdI_yo7#Mc+btG*TyTvkk<}CwVKG+Y(=xMX1lA^lnPqyI1tsUk!!VFj2^qI)zk`F z9XB4+Ds%}0U~IdvRMiXuS)0iWIJ`d|tDw~}@4~dQ&f!^LhTmGXRM6_MgHf$4Zows@ z1$TR~rd3dD>>;RHf97fx#PI56bg?$kpX%FE$#}oV^?8x$6E65*yplR`wd{C%BeqJ@ zn4gal*T~BR!}fP27tb7u=@a@HQ>aPIJiDV%rKtHjs*tsvF&AkSz1vFmt#|s@Z*F!M)d>L?^q`_T9Wn!C~MLo7Uu6IJpua_Ze;_pja2O#^m2l#klCEbYQsmB-FwFeR;Ta0lBV8q>>6 zBGuP7!0KnEX6W|GsA^wfNJg^lLa6Yf+PX|EOgNUV< z@o=Brq(yrY{mCh6nmrsxHM^XHFtb%yG%xdjU7A%0t*fS@D!B}Wy>w{yuEX#dspN)% z)W$^L@MKP zos8N8flZm*`n;NY(c{o%`yr>`dNIOz84C0l16rylJ^a{*Qt<*fGz; zpdy>;OBGL8SBM-hq1sqe_+m8=3JQ5?i?TWUN^ZKRfnl3ti3@YpiHHFOEJu z{I!o^755a1J4QmGX%~wl9&cvZ8%%5vrd{ zR;uO@K5LoscAZh2R@EM$o01oXPDXX@<`$mXA*|&Y9wIKJtDWlm z6i*%8u^3hK7;f|nM5B9&#QEOD`s9{MkuP@&rq2m1w(KD;n-?(hxoB^{VpH(jQ!$)&1wCOE3kkJGl|B84^_V5|Z-+`Fwwo+;pNp z)2CP(9laD)FV91AXI4;ekrx;?r3ziW#aywz(!r6R<4Rr432ki@l=3n#E+p;WYU#>k 
zhpX^PmC=dI(Y5*!S8JiDmY4FjkV__`U3tZn6L2sG}DQcIr6X^mX+rcJKr5L)CkPt2eJ*@Hj8|)v2BxzN?^^>EBp`>ej(^n<}o` zau306=NQl~OcJR+^@jQFU6_KQ2bf(C6!r76c02X63zC+pYE{zgB&ynrOtq##!K}wu zv2^S5lN0IuCi~d)y5gM#>^{MV)}ne`!Som>y4Q)->Cu_VtnW|c*1KzRXb+}|bs|rd zt6GI6IiWgL?Bg~4`Ana?CO<HIjt946PMH9v#MJX zd)uo&@2OLb0OXEz_YE$+%3nMRfQ7FTBekl-g(ny08t z-$1$oJpVye3EjXPWxr-YlWOk2*vryy9|Ln=kp2w1Ad6Ui624f_qRLQGI`r$V$FsN| z-{FQ?AjXnvX5w~dklZ{u(Y-NYpHJb-wh zwUph5_OTzvRQ0asH=!C{%{9DA%xl$aSUkd7{Hc02KT==ARa;OEk7ay@MLesiCrj4t z%Glu}Tc~~oKXw+X-#vTU)vAfJQJucak=P(cjp}vk?ksM*l&%?7 z3K`?hL3O&3*)cL(WTE;v%RDSv=uKwq)8(rA{SMdfLcV6*qJGs(WlDBrL)D#EB1^Y} zu}Xa8`RJm3h%egAAt8WNb56QkDaXY~cf~>%p^7EA*KcbV6su;oP_9@)yawg2-uEuX z)H{x)r#-}&QB99mu3Y(D4|mPhUV>?6eUe*rj;LET=N1IriZ?#oRr=?pm`VkN_ySLiaM89xz0pSS4SZ zb|>)hf(QxPL?Z%DnSDhr4RO_%^CqD_gK|T(G70f>Ais;jY^ePocV< z!h`I^qA{xf;9}KVG1aF&_`L9ks7@}MlGTR^`^tVUqr6hL z{s_}*9`~(gVV$ZUe6CMtHuWXDdy>ZtUk_2%F7sndyU^84y>T6adesl4*5{NpD*Ob~ z$O>`K+9;Bue6RBAJ@->|^<1VI)u&DS;i8?t`=G4a2VOu`bLlia zsK(y9R^OmF;Afa>q30N1t95mQpjI`*pyIi-?#!n2>s1(mI4 z*1p|TJZU=oBSI>EaPACL)f87Xkrn*Fo2o6*B?I=gtJK=z`hA66$mHxZv1vO{Dy?c_ z09ADxbJ_4rLDTnpT2%YDm{=6IpCs=p+TVWK+I9#fMdy7QRdkT8X}DWZw8xX8Q!z!O zr*5<#Vq0!M2d<>@9bZRPp2t;QEvW2mZKp*~NvtojwJcRUYZINf^Qu-}Nzt2ML=|mg zP7sL?`J(A`rs>?Br-GQON}A@a2e9DfvVl^~U9w#fc1CnDJ5jYDm+6nrFWoU#Thb%m zg(~_hzNCv9g(W@RlfTY%L@PVCGOd*s^e=NzJwL(qoFgXGY6g>hLF2`Qn(VW$xZA-r zI=lFvN-B1|A63!iRza7SrHnW8)e~aCDVgqMpPK1c45IorbL>|NmiOeM@t7S%?FKeu zw>ra@AgVI`N3KNmJe=tnSw1Kz>g^}Z;_}Z#+sUBIeuzgk{%`m=s_mm(+pGvwuO}~1 z`UP5*Lg2yAVM@RaDnmJbuR71&BJmxMfRpS*CAeG07L}sJ>)>K00Sh zblLDvg{V^WleayH<
tlliB>nS)*iOxxG$=Fr@u0Extt5VSaSUaX@=s!Er2~QBz ztmcL#_kgL<+1++km$zTBwcC*@H0Ba2RqV}P*;zQ=er(&1cO-O!i zqI3_Gxq;GxzV=~EUF)6P@>^O3jjNeN^Cg`gUA*&+qTwX3mNlbaNA;V^n3_>neG_Fgc& zvDG-A9R^og%*QUov~#uHLAvQQCehC0V>7k;x~!EBq|RpHUU9uW*KUh{bv}k8$qQu`6$1ze}X; z@IA@?N0&u={|C`U+rgsXenL#~V8fz5*-XZM`P^NdKcVVa*YO3qYRDgj?3+my6*}Vz zY<;XvJgRm1WU>4fdq}mp!V*1$Yvl6W)P^cD*G?Z=Yrh$mpS;+9+Rsj{wx8#t3yQBS zY>Z86p42>fRkCw|{eK|+_mh&Df{VYk458~8;_JDjSun6Z?cEl)=mAX*J0|1o?xM>QE;5RR^{U^i~ks^y>%Y% zGiHeTuk~C1;e=Hrdy@UiX@y9$@Oo^)@yn~#G;w8drl?aiu_>9ie+e%-Wzuvz#4MV> zxIMaJS@hHu%O@2!yG51fQB6ayu+)EeyWkt$Ue^{-wV2%yUeYq#`r0^ zejmM}q;T;+nPYECb{7BF3Q6)yx1oyvhc9)zxYT+ycTbQD?m{)cm1{n`Hk!L9$8Gmu znumVMvP+ZT+-WVoQXwt=?sJ&p$FNW`a!^n_qfK$Ql=%10W15F9WJ^9)(7c+#Tcvnn zzhqW?QpbL#pkQ}y#sAGEf8TU3rmVGy7p|*Y1$9$CF7&Xvotfgyqxe+DK%aeHu#(mz z??bh|oAWqVY+tI`$W~h9u1qdjN!2b~)sJ#jXNyVE&-^mHz-v>R?bY3s>MrzF()ck{ z<5i5j(T##bcrNIQX+x-wO8PqfeI3k(+KZGiK2qVmD7gY6qRkMl8q2Mp5qSoo0tSKT{zGsJA#y;d^hp)ALscg1#c6jeY zs4rZ@eIY6OLN#Y{l(Iv<%bqKxqq$_F_?zApQ2QjVv&+NI)ogmJ>0Gd1$*#P@Z+jSB z;YoajSBT|VHCql!+HNdvj@dho6@mLIuJ2=fM0Bm_5q>7MywsW&|BXxR{o~`<5|3pu zA|$4XYIdE<7C4*9X6?8&xxMs1qjUOfH<;w!S)BN7Ou=(mmAkJH!>ZY@t*l>L!Two_ zt_oT{^F2&U>u4UqCW@X~&7}x;MKgWf6*S!Y8C1i&I7_{vsjE4WRZ&Bwm3;h1sGcq- z3aUA`?yBef&GuIFoXX32%L}NQ_i}^J6|<}7LO>*ktEsf6pMDWjHuMQb`TnA=zL|bl z*Tt!x-a-WzUGNg7ru9o6K5u9iX!o3)z%?D#Q)y9O{UfSv7x#dv;;MSsF)hu(L~WIH zy!Nk{j-f9z!;fzi7Sy-!u43~i{*GyA{fdXBycjrri}32^-ux=6=%w5_hs5~lo4HR( zcBcmVD**K3zfm2havd*h5bRyek!#o3S<+F-G5`EORMV%qrq_vPujZWAhF)YlniecR8S+YN+kcb{0g(~?Qrex%Xte~amse~AYn3hUb zUoaZg^>_~ZDls1URwXUBe-5I8$-lEVrePb4mtoN>eM^)@{gl>m^*)%EmP?IP&C}ra z&l%ZwRP3Mkn6#z8&!v4ByDzHAGM)rmh6L|Be&?KNUBaHlPU=i1x_T#f*(b~==j|UT z?abIu8rxIlUH4WePidYqskeYC(sIcmgI|7P%C&pVilkn@=iynS+nPqtU$(rc_N+Zq z)*6n%wq~ay_VGk^BVSdKHL0}yBUIhV{N!G{-R3&AHL*9Us{cdzRCls7k&8S27<=re z`ajlLoC0*^2KqZ4e~w-8SN?PMV+ff6d#|uNxi&G-S8)6(pVsT_NUOE?%WVU*!f!9y z?wap7Y}abN?d77?oJqCS+UModNBFwRf(i}a2UPemk4d>9rh>1w5tO*@+kg@Yj?vU+ zri5=66%?5MjI6+(EbfhHW#ZXWrm7CbPsWyhfWiK4qyALE9<=QONYJC@?h|&w%pAh` 
z+wH@e?A6x9ZnmmwvWB^~po#x3?yIx5HTmjxt=71g3D!<9;yF*r&a{a+(-`hd)kaS4 zOzc-Q^4xfW0weE{y~r`tVm{=3Y3-fQ-gSx1Yw`oxY$i8+!sacJQcC{g>$(Z1d&hTW z-zmGE%|g-aDk}-c&x~Z!rMjjp72SONaZ=<7J;=QFKpuhFp<>m*Q@ZCrIc5L;`ew{u ze|Ex&vg})T$`0>u{MDsG+*OzXB~Zy2k9{v_fzrFS`#jO1!_Kv^h&q4`t~RSs|8otg+t+~;cje*1@=>{lS{3nrx{Xlr6q4i8+iUVZqv ztm1FkIIybPv6){ngE{_|Pu*Yrm&P@91%cnQAR4~KXlUdW35ov*x_lexlAX81SSaz& z=>!FS05qtbY%TWWYq3Qi$}Mh@?4NM?PWO%=ka;q~%AlAWXT^YsL>Rx59}tZ$hB zR7(pA$lVY4aA+H2ZY=NPH8MBFjkz=Ll9>xV!}LA0A^6NqnQWNzK72s?LygQy78ID9 z{TbPZEytxBmS;6GH`R@~w|y4S_m8`Jj5h&vQw?+!LuifANfs2K`v~B~p}*}+9O1)j zz+Bvoxsi7O%pJ~|o7foqL>D*A1)Mp_f&z1E04EOpk`-%B?F>4dvnkh1ckjmm>fT@- zU1VdHh&r7U2`Nw#4X55>JzY&rC~!CCHrb62X4qN0r_+g@YqER(R#{)`Tz;lpJXK;!i+LpM-0cRFn*4Z%#l3tGODOekR2 z40KDOyO@*I+W|~|_k?U${L4mm+06u&U zLvGB_unX%&Zn_(C$K3-Ur=hdcXtC*rxqve#Sx{i^^MDhNVa$bUk>+A<%>CgefH}v{ zwFf`V#SC%*M^3V!fZU>+0U!PtE7c;i2Wyb%rnx~kLt##+B!YI?Owmtvw;>A;v&=hjHGoYHlQ2cn%$@j$rKDISP-I>iG4Pp5bw z;^`C*ggl+%ftaULJP`DBimz?divv;5^FudXy@_nWeh!(v8ub>sJEqfPq)Z`nE~|la zd!cJsXD}|yhIBf0mO_l6^DDsFNbBExWYFGT3#FyNBq;sTC*%&yI)Wc?8SJQq(o#4R zls@Q_fYJx>Y0|Qqth5va1*Knul!hMZ(&*E2=oFM5{4}uA`?8e|FJs=}Tf7zIpZ_%Q z2*pvHsfpY@+0W&vzJkCXp@_Tvk~MqFAV2*0gzMheofjN0Jw7px1*}lXBmEy24tSm1 zRQ1{B+9<)b(VCb}z-!cnveTCRtTp3X2VSmkySy+6Xqx7E>XOndGnjw@^()Dv9B;9E zGdcj1OeX*q1F`}%foH!}OPIb~kLzD_F@V=?EYC-JN_p0QybARB3mD57x+=VztZ+T1 z7M2s>`T^v$xAWp~P9w`+8t8I8lfQH;K-XlhXmj(hqJg9fm^vD|0zsE#If1U*AgBEa zO9|n;Mz~yWRyS>v;fk@wYgB7+(E=vw2CYD#C0R~D>jA)J3(RH5G-xEt_44#x7s+H< zDHZ`DiEMD$0=5bbTY+FpvYfzH2r3Gn;ZB-HhA)eFRTvNQNJFxc zxufaF9nAsyb~z@QP5|r~$ZhZEfTcKK8oDmmOPTv!E~l@9SeCU)&0!#WE@`L9z~u*A z8YUW&TG)bHJhfWcSKdNbkTPBqY-~n!aWauhr_w#q1-VRrbpGZ~B}HPm_uDS>E*Do-Cja;j(28_AxxBdAq09G5gnwOTgE`nDB+FxzB3Lsaq-2|={{uOZ8 zDBo*aQJS&oNLQ{Si*NiBP;&uSb9G?LKr^%0oy}78H{i*8V##g-SGj)y4*MCN2)36J zfd;N(t}}~o0{DkMz`CokLjf%V?T7*_k1dcfdSf%ZomtOAIYY&QX`UjhDVy~fL{jm^47fic&i#Y@mez^h!%fx)1r zE2;Z6XBquC%Sc@X+TJR}LxHQg&^TZ(?yyVsr7PE|Mc03TtokS~R<8{zS8+472xfOB z*-hZ;G00)Nxx;QNIjn{Z$`0(1A|Zo%61dd)%O8^wI1MkU@lKrY2x#? 
zu5XU6{I8s?`WaLWCG`EDS$MwPt$5fLxa(ogS?vBf3+-kTkoqs+tf4EJw~lLMiHe#i z*LOzGvE)2uE#Tb8JBSh$wQ+!(OoplYXG$`gz|`*{ciqaXzo7;O74^}j7S}x07Xqky zmQ%H$luiB5RMgWX`>Rd)7r^PdY&U_cpFj?~nNI<`v?VLocY8lD62R(iPXG4O9)SN? zQ70?l0<1|^^`ERHvk6eW0=esQmIcC7SgxWj09^+aw?Ny~zv1K;N*OhHOqoFi$^4Hx z6iQ|jnEE&5uD5b`?cs@vx&Vy1W~$%*Q65xSKjoR~`k)qo_E(z%Fn|T1Y&U_cR^Y;| z^$6!GtF5uJ9}{+Y-DcxU09S{zlp2{>!r%XFMcr7*PbizYdJyL7vfTu(#{V90*pIP9 z5Wa@hfYj+K?m7j$^%X$Ne{rnZTZyKC)L7YHZPHZ$)0J#D0jn{u0uDQhVHMe=tqhF2 z&H;~qSypp1+W{EU9#pE#6p*C1B9DfW-2|%s2)L_dF{;9wv?VLo8u0!9kfE|x@KW`( zY*4Y1dS6om2B8LwOLh~mx)*TR(4{O5jP1~dmFvjjJ}9N$$l#A$9oQPsj4Xngtt7h% zSWN^j+^%L+jc(RORm`;lyby zu$an=zzZ9QdMoN}Md{ME$yR=3EA3_zlGV>4cl`i&*P*N~s9f&^A03vVO0z<>p*^5T zNf%2EQb9mUGMj+ZR=`=U`*>kBSKG9r#r64$NpAr4-&mebmr*4u>KU#5)j}O%u<`>d z{dN<$x(RUD(Bmu%j4x#Y`#Lan`R7X2`M&|E`U=mdJE*9lt|JUoBu$levk6T72y)k# zcoHa-M%Egja(x)^_EB>AXU*mrH6B!`r1#}art%|GQ9~t}O@M0q9x6s3hdAukr-Y&Y7Y<1lZ#G6wJf_1B$<6 znVR8dBB;nKbLmA9LzN#?>9?DJRqyM7zkZwh>jF0WTjT$W6)q>#x9x&-k&J+u1joQlOM=tGr2-|)`$gdP1I!r&_Yp*l+M-Af2FiLN-t8{IHDIRovff2 zDOH{HBBeR7A za(mVw4#~eLN$*4USVcQ4&^|G>J>^FvHLR@^hu&hpZ{P2EdNDJ4ayqlAFWKFbjJBou zkKLF5yuIt;^?|novy3r(mT@H2NH9zN%@VHWxuxBIZb`S*gmK{aAZLAo?FmJ8ls1Lz z&0u1pT#v0Ue}_!eQntAk$r6p&i7f-PkA|nZ=Si}fz|&^{SG9h_ga4v-HeV;^$@S9W zvHQq8g&tu*9XJ$Bpn#sl09E%uNmdhpx)}1+o0zZeR|@|0`N{Qe{Q+;2d9uF4ljwEL z!2}8zpctU)9w^Cb0#F|Uob|AsJcUPUN>8pgKbDP=aSENwkl(kILjAv_NL=p$BW*)d z-P0slO`z!xz*nvR^24+nTQvdYdO-bOd&xjqKjTH|LaLL0#6$s`n}(>mCrYxKK-8Ig z1J3#aFQ%?9btm){pX=%Lg$;m`|Jj9o>2PA;aul$|Vu-4Hq9m&cM134`*7x)9KSdW& zu7}en?F$gK7gzF#j$rmsz-FEys_u!BtR@ik1;AOCum&Yka#cG+o$33IjLQ+){;>c+ zD|z1B+Byu-Sl^G(>M!z$B}xr+TNANFX}6YKqBO=PmnhBo$R$c?kzAtGERsu<0@Yh= zpZ{#H5jNfueQ%;LkW2JM7bns^0||S(e?cY}otx?JA4sRV5`|PIUB2lbbM3LmE(OBx z2YLN6ld66>PwxttGXMk+gV)xELT&SD$dRw$t!=WjnPFEqTG?~$#`il7U~fHVZ(2!Z z|GROmb_3Fx=_Gc)lc=o?1@@kY9J!y@@RwxSG+8Hm_JMJiwj;g`@*10&3StVz@F5i2Ae(T%A{d0SnSC*6xjP8el&6zB*L%NLbphPn$w#ttN|D0JIRCsci#rwIJA}JyRpr+;Li0P zapFvwyU=4y;qh;VG7~slYp@Fzc9IDN>>dET*!n5Y9t%s^L-%OKHQ`;eN(RrG$A_91 
zQ6)UBo3K*2G#lxG+enveD6ltkHQ>lU*~MNf%XvBvki}e+-pXaN%AuQC(%WxvSY!Wt zJgw+SA=982IP@eN3g~?r@Z-Ii8;__-(u=!-cU?jT&$@&sy`fTl9BlIJ6)g0ZH~ z=qcM!An$*WBma^oy+x(INq52Nx}z|(9ANL=Y$Oy}{$^k=23%`2z25HkdOF#YZ78s} z-wN50Pu`jJ!jYO(o32|5F9P~nQ+TXsep8g2W=~;vZn<~nl8NrF+(3Wlq~1b*U-5^c zTCH&}w>4=6PqLu^-n}bj@IpHndHXT)bnfN3pO@~H^|c=2881~Tko~VVb?zxhM{P}; z=SQ1YQxgiZ zU-PiHfCpyJT(?`9FFCBByD_%;?dc{kg7}8Mv##B1RwVWMJrBp`hct&0$1>!-%!8oG>#uZmsOJq1V|a>x7ce+PhO4 z;BuXfk6JJ5X?>Ot4tl9Jr8Jlbs9Vfw!`(k^SPRPuaHS!q-OA_6Gltkej-IZ#>lvqC zYzF8$ou{s?B}Idny6o>ZsVk7FOSYUq*NIzXr1=HBDC_nl9G z&h^Q)r-6NL>tC!YjI@{LF8+_=^BO+euthM5uYutNv`%@?@$xp;c|0FnR&rUpD5OWr z^-;ASog?S1(8a7K99xP2K_m;f3u1;DyFbjJCrdJ%Kvv7Svd3Dd@T9eRh|MzS$#Q*| z{@akATX+Pxw4`Sc(E_eH8MK0cmSi{qt?B0hXszLBb!7FS<@)@!1)M1CWjSll;DROH zUoxGRADtF`&XNo#fYk#yto1qGz|3uCU}+d*xF)R^Qa}VajVG+s>vYbHIoF193BU!OE zlFu>dGgsVoknuuZR?|9|r`V&~gS)#Wb;O3PV6bI-tT5gc*!t7kfWz=l@P=zcY5b;V z>)3Sy|C1a**Lbezvf$Q*ap2J!GtJojX$E~=C|gdTs|&bl5_*xPuBJhKO_%$xdHn`D zon^85HM}XfnM0ogXd#<>dXax1t zlVmx8u36CSsF|Fun9hlYYhCz5pgFY8;&owHsXYuLUGmSTncNkO+$CF1pzDNQIVA|U zqqMYKG1q~{_W&)ePF590wsi!yEVRE{Xp0S60YOW$oPgHx(EiaYJRb~{<^=zry)%!G zyQ=y>^95#Fmgfmj3W!JuEt^i8r7K}c_eQpKry@3!%ru!cnHgp#U05VQSp^M+Qb0fv z5CIWX5KuuLQ4thH5fu?cS@h@gi0A_@sQCLP-`-oe=aZY{J9F;jKHPtJ^`);@&iT$c z?{m&Q_uPApj6G=&NIrB94FTT0QqOddV9TGk__re6ZBHykVFe7Qp!ET8Cyj@Ch5U0* z+9O;)KO3MmTjs6vxJp8P3z36BE^U<~ZB@W<3R=^8fy=%mvO%<6!NDnEn7FY1GdhPY+cO77Ju|} zXjxhj62mEEJ=%83x7Aw;z2}9?Mz0ZRi=Qm*RZmZ%sp7RFyT%fIOj(PcEOK9vD+ui< z2#MhovW^97n5;-yRYB*?NE1VQp8u<;Y&~lGE&*4EkRL2^D%^o(2Ux^#3Rt^=k&L_q zE64;!dhZR|E32P49TbDl%9K^$Dh3Ud>*P$hL#rHURl;%#TTP%laD-rM$DF7J6C8)y zHCH-4buM7(Dw(C4f-H4JDK$EL@RUDGk@~#3hF>Td%rPMp%t-QoDcNf>@ug%k!-P;$ zj+zh(y4n0TCG~-cF9lm-{NWm1n`1yKh)D*df}mtTDu_x3q=K+yKq`n!2Bd<(WI!s2 zOa`Qa&}2YrMPd_u|D?Hw|69@hvIeA9BsS5r2}XY5io_-~hit&E6^Tvt6^TvgeoF(t za7AL1-=Mi!eI2(Ed^d4ucfhcc z2MohoPU*N^hMe|x>9n)uMp5{~@x#_ty9GP78PHV}JH@e!nwYj=!w2pWe7B=(R1FS= zx12)Pr-9S9h<>(z>8tmjub)!VUeRzh)xRA4>#D8IT&M}->OvdpLTWjMu4jbXh&U0EvK+`KXTg7%Y@KrGIMD=8uu;*bR{KS7xY?77k!1+(Pal+ 
z)N%@4HK5#T6PaslP;42g3jOC;HEXVukEd)!dWD{Q+KU!_h1Q{E2U^r}3R>RE92YFbAZr-^#-9nxKNUkuG!) z!qH^|UDR?4U6&!JePO%PRtvhq-#{_k9dGR|wSNV>-H|Vg6uS=>Tj7sW4gaf^ZGKl3 zmZK`HfZ-If5-UMEcuMB2e62aMw3pN#dJ5I^Q~}l)uJsnbeZWBqm$ORGSrssx0@l^Q zVI!{!k3Bf>SihCQpRu&B7R>xG<;wf9>}1Su9pJzGQXG<-st>p?Fb97wUz!fo^!qrp2Vc~oC1`e5MbJnDVlq~N) zxf-c}uneQupqj&)WgET_lQ~@ZuI+r7yTDVL1h@zappY z6;2yX8iY$*6Q0=u;5tpzgt3e4U2ysDCA*C+D^K-P%PDZZgq-$c(rNp02IRHT|Yi5*k*Kk`tl}NVdd%D{ubr!q3qK!MuZ}P^yUdr`u!>;Cu88)`O#ol%XxRlhDg*KFhLp~e)3FNd@(rFj>n#)}J zmxJHG0MKRN1xEhT-Mt(<-_qPgEvMwJtB}(!l60-C6?E}WW@rybexVh>^%;?4qgOCx zF8;v`aNDO#T%{+j3K&iy>pcnJvDXNXjcpE=GDapdw3pFb@DZwKXNH8BVP$4U(mO+q8N%CCmG^)N`J_Xpy6<4lO&-B8F4Y zx)8W*{9_)JISJTQP6O)F=v6>^<6b+Lp4YYr+{@2a@mi`Trji6 z99r7r#@DT)Xhkj()!<(Cl0}ZJx|C%{$|8nS$cmxP!29GnYoo#bp#F8!-9ALo@^;>y zwQ@{74F3KgdOXCXt-7p2EG*pB?D4HR5BnHge3z~L0mE*6$B;&QbA-g zAQglr15zszo9OclCSnu1|C0X~rsI}IzO^E;i5`hC@~st#P4p^hBi~w)*i_!vO^3bG zs7P$04T^W!JEP&Jcc#ZrNUTj~I^uJ(-MOyRhWOl+{}ky6pBb(D0cd?aCR<;pjlP}G z;eWO}F~M+RBK)E;wBeM-*9FLB$4i%8(j+=x{Ag+SLNEISF8e8&v^uJcp{0MM?ctwLJr5F_H=eyI3$%M2SQdaaob$Jx z>tu6i>0h(C@L7Ns*%T;$ImnI{I{)M9!P4u&3K&j7>zBx7pBIHz^crLJp#CwN+kXMb z8r9e64Repdy=B95tujAZ~b+QG$yFA{Dj3$mG_B)cRtITnwy{swv0^q7!ay2a|RxQj|1jtGBQXUN%=D-(vNQ3`NK`7!E3StqzP!Npxg@S0rFBF6$exV>9@e2h3iC<_X zA`-pa(?CR`w@8@~T8W5+hHcp$3C+&)3ndZhUG~_)+R*E!+W4HVM4^ywPjtmwx)Pa; z|M0=&cRtUj?VIm>AYDRzL@eya^fC1zGQ8jGu7qK@62`2}l>Wmy;HKW4vRi(o?CY_T zr0taNGl7!iwaaEiB2cmXopM&YfU3>SN|Zh;Vb*2}OBWzVy;wSGG6?$qqy#HV+Wz>V zhX9r?5dHD!OdHb@SQa>xMmb8vW(rD|BR~C&M5&``z)`tT(heLQG?Aiosx0uhHkw!i z99Y#ml14d6#AXUf7XUZ)?iCR~HYaC17hq`iK&@o)-~I+juXjsvGnz=oJhNTnwvqb5@@x&=6?_jg&^wX}%RjuRv8 z(f5T%P>j5#67-d=7Um}bV8GFlA|XXrFmc4%wIk8Z zC8TmBB(pYCNIDDn=@QXgj`bP=Nt=+WssTuE3Xo#!f|ObQB@(MuNPKf7je3%Z%@mS) zkfY9&j=J2;rmVJWx6eV8r5XXd7bs{MNnqyJ6?dfjfmwc+l}gV_Mop%G^eyD3uShp78fo2W=Y&>#0D$y6nUfXkSJ^m>Mmt8t zWC}(}Fv}JdYxy1I5{VNd?O}?Q?*|w?CNa9o#-B4^o^G2OO=IRozq@(-9fUcj;)frz2uAl_TV)*U9D4>cCC?KEo;Ho#52mEp){yT?4=5#|!JCCRgBhs_nrk@4ksS|SailHGD~@!|c*TLub57flyvO@>V4cqS 
zbtD?`ejRA%u{L$d`GDpcR(T+mBi(*!8dI+0b%&xv-N?R=!x@p7|flRcz>rI*Vo zrRS9LlR^xokn$XI$qwm~7Y6-XmXxAB*z?Q+$DLr2}O&fSUB*7RCZ4(?yjeG8(fAF_Z#I<437SdViM<*>%B+fMubgJr3}fGpJsX z{}rT6sI`U^u>0po8I7b6Ln)-3cqZ`3Pm2Y?c)pCHo%g>i548HI;AGEMYd8T@9FCLG z$O$o&!pR?iM|wBOb;_)u$YIGS+L8XC2A_nT2vI`f{x34H!y6B?(r}-Ys&S(vcJO z*P?Q4Uu)*_Z$S&Dt0Wtj_YNqsPuVB)%m0&;MBZFeej%WxR(>I1Lbm)u$d(_+&qa1i zIE0Y2#36(%r#OUw@LqnylNL&I4MRs0jFT})0n9TJ;OfjEL_Out% z+000Wkbf4V$Hlz$GA&&ch#ta!tDc$Cd0j{}DF1)7)<5_+~ zfCn3GQ{dL!fo{7<-45aEpYd1%BzD&rD7`UY$XE(C2mc3f%^jp`P7GX=3mgA!ZOtO+ zUwZ=Gz9m;o33l)%ml}NV)x>-`B)q#GZ!)CN%LyVmEYXeFN+D(`+Cfap z!YolOnBi%hX|v4w-Ds9E@f1XF=Q+Sn-`I5J|Gv`TWWc+(dL?0(VWk& zGS?i!m1}q`K?0b=X7pl1jHO_+d@P9QXGm-=uHwZ;+dJET7ogj3rEb^s4d_<>5{(BN z*E}ob)Fg>KPlR#8J*GA`MS-eg0JrBL$} z@=WrQCD%TUc9Q0j_fTjeSBgf(xLzh`SXP*1G{Z4d4ra{QN2_&G=`IbimBP&aXtL{e>6o(v$K+y0dk*cOeE>7hO4Sy$*u;!G zhQ`tZV5td+trTYNK#uvetjzL(V{$R0Jx$ZQA7JJ!soFZL@{Bu7V-GXLRtht3AjjNC z2J=IMj;_zl(lU)NYEEjNxj%&@@@9D8#Ok01~ccm#ovz#|C61Rg;k zDDVgZQGrJg2n#%dKwRJv1Ofw(AXAZnp5A|#-3%2vA5M=SpY2GuC%WRZI#QY9h#si@ zdCk$}fm-h~V!|ML1ygS!9k#k1@>|{IPz;wt88er%TKeF@)I00N0%a%JU*IA~+m<{3 zaH^a4NkL9*bFF3QfE4Mv*`;^0xzHizQqcJ)a?ht^qhWrqy7wCmOz_lehYjkF0`NR2 zwVNBDUH-jOCTsW~>++60d54-yf#-81qNQMDaZ&CQN>WasjkK)J_-a@B!y_T(MVZgA*~UUZ1L6m<4D zl6oh6agC{H^RI%_>=%9f13)_icdaoMZS>wdH}h7GnKxtRQs6m04&3t;x#!;+EXWy) zHf_6LhZ#V-TA^KZxn13Ydb0dZL?s!ri*3wVmUY0ijmtWtH|r2{DeN5INCSI%TVRg{6%k*bpiaBF zbM`@0H8PZKPttLFJEhFe4|?YZSC5UDO97`Ejf*XitBI3?nuw2`2JJ0VH_f8#c)dcq zXwt5vpimgF=vEkTxRJ+K_VzE;bC}&OH@xRrv zV-Y*VTnal+0PoyeIA^3)kdxvWE7so8cFWNe9It6R)`4>6C#RF^TOBwI;Fz+O!p-r= z0Otfv0w!>@x3u-mr@BQxCZOJnE8{R(0-Tw09fHz31g5N|aI@b6>YLu{GN|{+jKegD z)1W2ira4r*W&uvL*Ro}fBgdu|f$y10fuNxC-nrB{y_aO`_ZoI|^wk<#ge{h{1P!*q z1I`4yv{QQ8Vd}3n5OXQ;><%Kkw_dvE^cLooj-?tI5?>OcfiGDO_KgrC>8kGyPQ*woMX&f3Ofr91Maz0 zRz&9q?#T#`HtoC$Zcp&OCAGV%$uf9c-eJi*T<{QcDe$bC0N@!TWBVb2d-B0kukEqD zI2CC3bE(||%QBDuu`cs4?CUVWL(QeY(=?5`XXG=YY8%g;c+bymVA(MdPBlO$FBUqnHm(jEQ`T|h*pzjMwG?o^fSmII8Q14>*K67fK>rsE zsCr+NDe+3Kq{Bpxy8zTCa)`AQa`Md}tp7oV^on@ 
zT&-4)TFsQT6mFgauIYVDa+8u7hwGfE_AaSEqSN0;OYK(FTDHz1w-Y&b%E1oP=tl!F zmjX}nSQ^*8BW1Z(*Cdk8^0HlZp}Fky`V)aR-xS(J=M5+`sg$G-U&Ce!T%=TPu3?fy zCPvsKfel@zZz3BiY~Msi2G}H#5|B+2X%4bU0>u=QM}UnTrm*Q>4@tUtT%tgf;1UJG z1eYifC%8m`K*1#nL<%lZAXIRP0K|)3^qDHX8eQ5>ee%Ri$55RWb7^ z-Mq(v`<@_%7Gi<(`bQf~p{8w&A8`mptyAbaHrVVc|BjL=OMwQ01I-@Li1`%I9tF;; z-u~%>)?+5DYmXF7Je+bCc~O8?lVjR{+FY8L|{;vbua$dOoX^^%TzT1+ME|BbPl*K{aMBp=l48H@=^8 z=B<-Md>6IaozX}izztZGW5B`?X~cR8X*U4pJxL^qnCX0`y#;0AVU)ATCZX$I)plbB z_Cj4kvnHVt>nWIB4}AAOx4CXKGT^!9vYGbi?bjy*%5D>sjpyov7z&4A6V_2?4Q0f7 z3T1oNQs4D1l4aw}AVru1O?y`N%{Wk&JSSuA0u5}-I?}8mjaW}1EjNieZ{%u`CH6KA znsyIuzllKC-w9n0w!CfxTk4K8Yd9m;Q#ku5@Llg@nbD>+87>{Qrx_3Z008YV0a|sH z-3v4@0O>%p1~g(l1+-^@?|MIxo&CIN(6omi{~KgqZ-Z2}-SU(M#_Swv){sW5r;v6m zXfSv`l+Jsa?Cpm>9y#OymgdCQnP3SS`Si9N79Bs3!p4{Vz?Y%!2dOriYYa&y{V`*b z!EmObZw6bGh9r}wzp-DNq*-H<$%@sOWD@XpSmMnYmkAm=<{5tRL*=wHW5LjG9a-Fi-s* zaMOe3JVtM==$UYWq-~U(@JFEGG^ydaTO^=Yx$HRLK zfaITL)vVSU-T)w-E;XDUXjuL|G)AkS!@CZoQ4JC`nF7)l3JBfK;dLDE{G3 z0Hh0KX_suZu(Si~G6&MA2FdqPr5aa2D*lA|8ox1$M)w6jxt{V!F|ds$LcLt){(B28>27B7(HX? zqj%dV9K8U%)Z0ad`^D8J5|Vbt@ZQ%bM_yEFxX40~>OiC4C8W`vkci0?koHBZk9ERL zqwOX*()JlHeHn1{ExCZ37m#jMp8;Jd=r|hP91V2~z|>c$n|6t+sY>>}L*E5A^ahgl zFAIMe5{2JRME@fpYPGMP19PelqEQVIv6uqU zb-+ix%`y|EdQCJJw5jOjKTwLi)1`*(L9Vo`xj;gj>voUwZg(9qnF7+LSAm;;L}a7r zR1+L&v(a(?k1EK2fhXezL>-B3j-t^{5iyxU(KmpTdV9&$`-YqWjQlqTG&6Ni{heau zohvwx&ETr)7(qghEV#0+^s>&V$rO;5zX{xQnRL@B!K{tHIAWaguGhA^Z~Pn3@L8$h z0yx(hexzKPlaGYv88L8A;n|*zF$xr1?_A^8(bZ z>NM1oWvc^ew1Y%UrhwFo-1JwnAyxT7~#0p_7~zRu|A?E zQ!x7B|4@v)$K*18Uf`y@yRh0*M`!FvAE}RT<26pG#eP8zvF6m;WEK%_sk z_Yz2yUl9;Rx7e47+?hR=ZZ~7pBNkH_x)b^6v%*JXSDS!Hdo|D*{{|3s%W894i+vQi zn}KYjh*(Ua=q2E!k>^Avs;L_AQQj4i_BN9%BA}f6rlkC;YWpA}H)FVRZj_gE#9|6W z4|&u{BaaA(_6q{N57CE4*iMn}(8JSunr+rU?L-yw4wI|2WdSt@mlDjn6UELoi4Cg? 
zJ-J*qUyL^HFu%FRWI4bp4eaSMejDi6GfD%EO-5;Chltf~B;Bz}Bg;HiX(S4*{F1s7PC+uROA-*KrwJx3Mh_~dkZC$DELVWhdZFj`` z|LaJuiT~lpK?`fFY+)V3)xrw>xW-7oop`i6OHle*f=XFWS(VTF1aRIh(s`TYvfL19 z+Gg>>tAMhdLfPmvrZ$%$Q-K=Ffo2V8#Ci&7Uq-%riRd;(Q=*4#hO>I@gw1QW0?v}M zpL_w=cz_wu{BL&|?Sv5)PQw^ltCaN=(AIyBI&WE@Y93Mfbet!>+Trro%p*shSHgK+HLbKRMl#ss+Y66YODvXFh`j+ zlo9JGlbl;ZvT9snwrbSBsq)y>K-VQ=brx&rvl=w|R)?END91#CnW~XmPXTS} zHPm?107J&AGNNCZ8cIPwK1u12+YtW;6hVmJ)p2FD`$afzRg<~uiBnfj1H0?=> zo4{JqTPlJ=?EGLgS^j$(%-9O;c)KN_So9d<%zrINlh7{O zN>%mFkeppyWp_p+i)mLm+EY2wcdiu9K7iT_yNGHmc1Ww)(y?B9BI6d+w|QEo-+YVJ zr1{^jRQZlHdq|_!Q%E}!ZU3&2Y2y6AdClcC?G{?$GLX|Al*(?&SxuUIwah;-W9?ow zwVp!So5*>ar1PFy|?nkFH&VAFNif|O|TnbYdggMbO)Nfpb_gSpk4MQ>bu^Z+d&JmgkL-sM>v5d z&-spM{z`kT<+R&?uAbC&VlPuhTW{#o9cK)jRmgY>W}io{`>J%^<{(EH!%Tbe<+b+% z%r2JAkR(^(XvR|DV5jTum)_m4knt4Ec71@lZsZ(MiR~$qnVB)i1}$de?xwnW7jDNa z$CS;??4tSK?$Q}UI;)iR6wqEk&RZw*S#8UJ^BU_pHfZa{)4u>nJ6kF{qtl-0UyD2~ zob4c)ehhJ;%|EZ8J4lX}`DX=J{xM`K62ly57J){rr+{{B9|g^OSd4H)uQHo}^iN&? z`bMDZN(owP)qt|LZY;Z3uWg={qSjMDOMM17Z$vupBEu<7{}%RLp9acK6Lo2{&Ti01 zi^*j)domimqEZ2E3d$1yAu?KQL$%?QrmY>n3{Ky8y9&@^hve)gjZ`0wG>b?h)>BA3 z`%@rGoGhJpiX7rFvr6-Clht%D-g^^J_7oWw&hN9EG}nx0X%m`RW7B((71Az4&U>*W zZKmPkQUBWZ_{%A2v$nNvqEV|cqmMYdgl11dqt;U}dkA>0_gjhC!k`*6HpQWTkNcX< z6tiPRhjiy&yE${yQWmGPA>Zg;eFb&h$oGZsRvWGxr)Upo-U3D_y`+o@Q(LVD&Hr{c zmT2u*BDJ29(8i+yq;|RJxj-f~Gs9Ba>#BD7CJhSSa}u-#fv)B6?wgqeAQv9F!m%8M zV}*>TVD<~-y0=Trt_X@TW7AUFOLV^Z5XCI=iddc<%#_W{*vZ|X!%{U?$ao55M}LDl zu6MqKtgmXob&V~~8nl<3Zu&A%cB0(vTiVCOnVEi-|Lv}7w6BAZd=GHm>m_NM zTMUz?y?pWBJAkrlq_P(Vi^=krjfO}g*Y~-)u^e?{rL3orb_lwYW{Pm$=(&bL)7}Yp z=vOFck&lX|&7OAmK^okZa4e;xA+{? 
zL77|rd}gM|cj=4;vkDncp)3zvccFkRY8W!@^6cTS1IYO28_k%>EzT@vj2KU0?CJ-B z=N>E5!wR`JGt<%6R*XkH40L@}=o;-|>VcS{jFcR1VOEZXS%r+JQ1&eH+;fEI#*#r1 zX0F$$-8lR3eE_qCa^tK%*nKE}#b~IucROb+>8w)LQ%HLrIqy$J2PAfNwc!?sb{1sK z?SQo33(}&^ObN}9soW@t&7cwMDWFABpZp%-yRkVz|IS=W({57lbuR#oJeFZ?N^|?v zHj_rIr;v65a^5wvcuWNKm`_@Y>mFUrF_`h+0%YwYm0J?59~(-sPj{l&xhAn;RiP)B z%jS#G#vS~Fy=Llq&|Wknm>iWd^~K;UtC=qb6WV43gOg5X1e4Z`DI1f3V@fdSahUn3 z!J5xZ5lo*vF?OL5h+T%h83Zpwl0oz`BpHM+Ly|%KG9(!UFhi0-1T!QVgfK&rD;C4* z$x_f%3`0-Fn)$WsD;C4(eH6xi?TW=PdS2PsH&-l%)mJQr(FYHW{n{0aVGR|FVGR|l z!{{SmrtIur7fACr!$+p+#Ag`3r5)X zTb0hVcl9Jwg~^?%L~>GRvAZkisT~*k4;9kMR9hln|F%EIaGYV?2=$Xsq|J;sXI7{3 zsZ4vy|5to|I$tQpPs%4#`S|hKj&%Fv)P_{PJzemBlGKJ(b=rB#JI+6P%Px(v^jmj7 z&^V^a+wJwnvFUkoD6NTU%Wk8wPbY!fy>(IgTNf2GpR$#3{sq)|i^6&LkW(n8K-11t zK6xqC)!QUD7-t0A9_8P&Gi58V@^GlxgBmfPg4!{c0r%~e8)I#zVXN29e0AJNwT+xF zBEw#_cBeJ}+g(~?D98+jN@_lZwby|6dMC+L(GYAx`#a=@5}V&K)oht}TuE8`Ka|)| zyVxZ*YZ4nZp90&2E!2Gv6q&-4V_&9F)6RX3yMgNJ8F-n{RA2=C7l)cXs1frisQnYU z@2zr}`3lo{txkJ9=YyXBustHP@VubnwRgb=wtgLK*1+cbuCfMDVB5V9xNl6tw%#;s z_1f;pJ=XzkpOLUFsI`2>MwZv^I9xf7!&869gju|~|Nu=XY2ePeGw-fS4IdTkHnt!n{X56NERT&~rb zp+x3?yxXz1cgLEVPl4;?PXhNnKrVSU%JrF{s?on|O0#1<<{F@Foz(W6K-==SLQKKt zI{T&E%G;LrFAr(Y_jI?(OFx>taj!SmRLpz|Y|kP0t&)Y<;-J4^ZpYfcQceR~VJn60 z|IpkcDLY)%Xi?QjZ$ngAdjWW_*CXq)>9wY-NNqdh=O_(Nl96E z5%VdqO-4t*?vk+0sWJ>(owkZxdJ#=*-dw3|eUs(=5LZRoTt)i6tE{*c*uIC{_e8k} z+7xsg%vF(f+WyFu7X#Sd60k)VSPx8ijumbG9fCyRlV)i1T6-e& z`p;8U4;8BJ$d%I!4P&_jp)8nH%6bZAe*&)S{g=#WSIP3x5NFyWYRTIuXWoq>p~Yr# z^&kz|%AHWNnKWWOg*5MW;Jgn@=bhVYc%kNBMX%{VPPqkWd$82@@E|T&+wMpA@VkMm z(g(5(HMOYu6xc3A?pr6Dddp3xH7#owo(ovJRjNvo9g6wsbN z5BTn360~L2BC8pyAGOzHU5+NAUz5uA1j?2_sTr~rxn{|A`pePjuaxx^(tZb=H}b!t z%P^tVFlgHA4L)=Shzfrc87w*2F`^!Ce%z-`rxTHS3Z_rK32?p3T$5j?i;yJWVL-|R$I+>w}FPOHFpBG zNE$n$l&%$g&fZYP0aj*)B!dAkLz2l3ZwasCjqK_OvfpJ)G8rK;CK;?q4B44&l%S?#| z!OWEC%0)B!=#8#+GR99EJBb6z}XZ8PgB+pX+X8cj)S!!>lYzm&s z)qn5e{9Y2VTpzIXKA=g9DXqWR`v50>QSQbT`oxYFBSqTn;==ww!PBLJ?ZHMs`L~N1 
zZ36PXqi9r9L@cII^dxf9>C#D;2m3$%ZWSX%+6^iX<)TAnF6s>ww2C6PCsoRp$Dnrw zFvdM%F@>U2k(1sdDN4&RrcSmBM|*B^(E%V8?Jo6O%vI2>W&B)M&oNwXW6)p`zeo`yX5eE^{+WFFdD%QPs#XA#hdcg0-k#hgKdDG04cE{Y!K=OhrHW0K~r|&%!A#YEq;D%PF!i{k`N}`$@ATP%N zxk-yD6upX^^c6SyDtdz=+G`> zCen7{zJV0HQa~PEZC@e+V}!1nE4`Xy412_43Ph9k0}xG=)lzSZfkdS3!+jkoc)Cz9 zda->Fxjvk2+d;%)3PgLLYVLc&M`K%MHOJRg&|Wz9Pz``QD|_wJa!e5MHRiyyuS-Ow zCnAFeQxK{<7`W)~M6DDnHiY^F3WUrEi93Md1TZ<;|O||pRkvV8apr1_ufs*{A`mg|eOwPI@j?5~$mRvbna(qP`F_;2S2XIjDhcd>`ujMb|wD$~s4P;1fq6BO^}noz28s^4YeoRCi${r<+@L0oSs9K$I}iU{O;3#?u*LE3^<_@6N_BoQZmF}+(xNyyZy+K(C%7SNafor^ z{9C3p%k-|j0dW(Ay0OE9WxB;l4qdY4I$foAx{MJ=4X6ehuEoUpU+>CFtIJAaK!vzFkOzNWjIBklG!LA1if#5@6u8J2MQeN?`+)-; zGne8lN^y0>fC_N8A`d=yn+HetGS8cKqZkwN12(xLe0i;JhD5eYXbs+Ke37rE$pFgd8!Tf?VBL;KDx10QCdH|=4*-hBXX zPi@a|eN08AF?)gSY?t6lPjHp9pTb+!zSMyuSBOQ?xS$8&-=1nLy=kvP`8Uwk`>2c# zGptW_>6)4wLz!w^y`GjPa(uH09`VK;65gx>p9-xcmz^W8o1@ zqB-6y;*Ho(;q9}?fnSsfV_~h}&0NPso8S)oPXOFWa=E$E{w5`fkZ#b#u0a!GKn1zY zz=b29kQHSGh|{LHt-Dg%Y`7@lcF@-u3A{}sc0Y?m|0JqC-z=OXdJa|7@ zQkqL~+7{)w-Ko0X(K0311xrqwi30}?Tv1tiQE4t@5(6s4oq=5VZ_r zwoAF+e*okjl~v_w_E(jzOa&*yUH>w4Wi6(S9HMfxSw$PMpF-O$zoR)E4^I++p12p*hz* z>3tNg$lru|6M6^KJFt{y7QB7USS}-_M{|ua*<_>0pls4IHz=F*4h_mCyWs|9le}wC zHd)acluaU_LD``5X|7354y!cRm=g{nn>pbiw3!nQVw*YPAh?+m4x*bm;UK)36At2= zIpH9{nG;^g2uJT!d6(TVme0iQ8oLDpU+s$Y?zl0>epmI_DzTi5 z1RH<;#tA=H+79m2O959Olnv*TgH?I?n@arl0ZH0zm6X0!QUSv$U|o(}_5$g$N!dtZ zM@xTE;Qf6Rt@O445DoaZkSkgUxh`!vy0q+EQNt-*RbNj1b)raEJ64PKEIU>Dy8z#~ zjB4q9T+*LvvbGWgYb%$qY)Dwda0*y|jnPWb`<`^!(^|}-r5z6a@y7vLGo+r&d#yza zbPgO^HlRfer=T_G6ToFJk}jJJT-L-gt6n>p_Gyq?y~E`kXm6jbX!)P*`WH6#FR0-Z zwC>!siS#eL9?@}&O|LbVwX`GF?*Rp2;hEvEY0FU){ zmq}}}!Fo`8N#@7Tr^z+)Q$f~2_O7(RZnY!J2C|6Z6tZ?jyF@Ex&YB|0%5v}PXb+m7 zwFQ8+MSwM~kEs~U^MACf1sQ5V^LA6PY6V+a-c=H-rdCl3vSXz^On${x6f1AD%&RG` z{sp^zz@&}KSEc7G^LA6XdKEeBV(G9eg9^*0Mx690KzODuN#fFUx+ zl?7Z8yD41#5IAh)CQ)CFuQisgwD)^Gje3{wkv+kUTrIX_;n=kx!DUVW-qAW6ttc~Tg3OsTySNu zbTP4y<=@JvIV5-bg#fMVWW~CgYl+2=772b1Ej!R6hEve`Ida)$G8;^(64fAk)~eTb 
zFA`S*v>IgyIM>>&<$tzo3o^6?*<&I#oPyS~$Ynn+TsC^SIkL3v_H(WPWIZm`+-hq% zSPzc>xrJ8g3oU-Ih~X5le)8ZkboS!2vdro*R}R)|%fah51G2~?1{Mxq_@C`^mX$e+ z7*0X!AHZcJPm11PwK=ltv^nd^ivU?M837jc*_yQc&vs7RG#;wAlBF zwdY5ceUb)%$eALw?iui3{$h}QKZ_hfaIL{|v@md z+>tDnI*>gc61yo}jREH|yi+7st+ir-#a;$#uar6AYQU9&H)h!F<8G9(lPh93g{xbE z!$y81)-5%y#<0>}{=Mc}fR%yK3wEr?^$D&Fw4w|oc2ltWHgedX%5}@t)q<;5hI2`p z8K&vi0g%Xe;K8NDI(XAJJ4mE{Xs%(GNw!-I$OJnG?B7XRbOyc?Onb6_C&`rtzLTsc z4ag+H%YaPKN96A6`Tr1I_QLaN~ZYgL`LlfCqNN72jC*T_!xv|yz*igsRy|JBOIxN8kmv^5ag zZc1z5C&*!Mkq$e%O0)*pxzcvB&-yXoYLjfZCr8~5(Qxep$|^TFUi#p8=+${>yD3~9 z^AvE{@sg`UdyR3WooG1hCqT{nC0FUz0W~emS77DfxUzsNVmF1W?;(f1RyyqJz+p?7 zXc%q;&|cX0?Z*IDcS$unbGC8?nplo23%DY7Q@BbzP8~LKk=S|NOJbD?yF)Zgth7fl zs(wb5Y!f<;4TdhtpQ}z||75qcVmO9j+-wR_cLHbiy5*+pg+U7NGZi;d+9MeE{+uG^ zJtwuix{oPQaa#v8=3SmDJx>`on*!DT&j5E#%DO61EtXZ>P-(9{n)(Z%<;NsceO%Qa zw{^hAlY?sXLq*J{K=m2mu6v5IDiV}c2J)2leESVg1EhA4{gjDZjS6n0z{ZqAYV;#T z%%&jq{+|MOCD$>SXj5nt)l*MWH6w2b^e4116@TpASrYNwZj}|gRm5%zR(Jh~`l~lj z7FOp4g_Q}cw8`q0?*lc-ylweA73^G*u;;k4f-7P-g{v!`01kV;TvjE6L|}p|ZT~Ox z1EA*rNHvSSw&p7^yXdOG(yKuBcu4G~aP>NH*bY%%jSbw@1XbD=;LSe-RPDHZY1L;d zRjvnE%E_5QZ((7lir7t|>L9Rc@fK{a0t1Kjx#DgDYVTlq@CAU?&m>k0s+h3i?pT0H zWtXc;&sE0Fra(3O*VJ7P6RB!PSpae)rJYuK-*Z&Q{e_NR(EBg{s*2k(WaPyys>-pb zGHx~nsJDQ#dbi6gb%m_{xRKHhtS^6*BIPy8CDjUBvlKY@=Cai2&r-x}3RIu_E^yZq zBvhM>Z<6t0cSO^n$q66 z^5*jZsVOo~EpM_EDR9`-<*Cu1r-<1Uq1Y(y7LJ+%55QErdf*`~$lSH9+ z*_b}MPf^iX-yOU z5|ZJ6HVKGsRa5$^rUHgj)-@;I2V6EMI$P0;M6ZCKENuhsg;s5~wF@oKVRJdlf}B-H45xtA1zlDwv|{H3WtIuF{0mz( z<>1{Qw?_UXBEW(Ad4DKA#1m9f@JWttg`Yg#ui$2+F{q}4+1?qrJf0U zdl;nc>;_)TG4RS?4HCmCXl+I=dy{n84ufcEdl+xrOVNt_LnN(<)+Vhwu=(kdmIXXNeRiO6zqd%kG1a!fXX*E*(`CP(|#fP&FUTG(0CAwjftm{QFqibT#*DAYEM|v+B&Ifvj47u=qPm zB<{JEpdBqiVmJk@hmgyDNuU)?i#m{>EN#jfdp{uS4M|oDS0Ttx7Kwe1EIY^|hEvG8 z8o2C%qKEAToi-!;!`hDEx<>$5b0t{y)}}1m_8n?E-UT)oGK~#0Nmq z`lU=-3xeco<2nnR;B-mLf~4j9s}d6xw9ZE^OCB6FQ4iK>cd{0I6`)m+Xic|wq2;!+ zO1U>=(A#F&vzG6#dbA!yF1udVgYALK`e?Ct-L*sRuRI3G`ifLD&9%~E-^(JS(yoJ1 zjt+)-yD2H_n8$&`{yEc`XJ;#)6ipC07U 
zno>*x@1JEi2Pqy(xO7tU)4t7@e2j(5fw^^EF&O zwO6H#LsY#YJ72}>(biEO{#QGy7=}#Qqaj&UDO9DA!=inf(9doT)5EkArY}yX?0d6i zvO2Gc39HbL%nbXZ9aRie88@2()$|#_T|X+bK)Xy<+)!y}OTU7&{H|;-&ug_5D$sOr zBL$_86mUaD%%(s!b|!#|xT{|Qazmv(bABPx@(_W3beW}0fu5RUYV|h_X{H+S^hzGiE4yw@)6)~Fv)sASf5|>adlj8*J z4L~hbJVd4 zX;0~W_(%ZNKc$u}y-ckN?o0)SMjce6A1Y!t1*+XqVfDOp*9C#Q8Yus?hx8Ih1E@YL zwVYjTDO6xS*+Dh>p(18ep!zOy*H@*xHaCg#Z&lbsdV|mGXwK-}-b5+dAo>z}v4eI5$h*+A|;%#2@xXIS4@PGD!q_mmPsw9yV9c?nx(Ev0|p!ST_sv{|Z-e#%v7^lc4rQ*2YGP;Bukz7`8gpFo2-T>j1P5Kw?G(z~2LBj(kseb9HTna-jC0 z$%!*4fL>891>1VL8Zw3v1Px?|pnV7uGb#vv9l3K_mUI_N22J;4v{`V{bP8d5Til4U zB|+0IQ4&5~Gsenh46&oa;0?f=y{`xcqgzB8G>xD(4@Ra?1ih=J(wn)`plJw60Cf;r z2_dng0$~^O=SSp3*1RCA_^ZLwL-zxx##{2)wnSUHD_tn2+xzze3)z9a!jsme^6{nV z?$k&&2LJIXO5i+EJV&{DG2#1BVLspWLK%9Yrt^rlr&u$Mda?JU$Rn|>t^ZdN3F$_; zL}FzkA$C*}$z{l)kCTa{K_-&$ku=nliS|VDSB|9=M&1(1WFl8GG2Iac6Nj#;VP8{& zm{CFSz$WU=o+lBU81(4OclNahn6Eh+DE*dfEUsfqgoYc7V7A#YXdi>bj0%H09|PQ( z9LO@yp!U4;tF=Jsmt-+|&Hoh!!9==a&^`u<85IUk24jnn?~AIgI;iT*Bd9$ced$pE z!9U70*u&P{H=G8+sUnAUw&3(hGLG5AcJ3;P_{8;R-9mLg< zF-#yh@#YA$k3eEZg~0Kkwc-6uB$Vhn0buBhc7}cEM|1G{(`dzdmJAaY2VvqpWj@x` zTw}TcLn;>YDv*sJ^D2;olIB$)T}<;TkZp1EDv<1LUIns_Hm?GSpypK|9X($K(>wj( z?2~aFP<%A~JD>n*S_c#%P3wR{q-h;UjC_}!#5N8faXh7(?4C;76Y71~en+Jkc1WbY zK7L}tzYV}YDSmt+)6tXYNcB&Rw5JM%5k85z(_DI&kC&92AXB+|nd=RGx*H8)7!9eA z@f655BiH?xblsLdv1T)anf7S-#5n-7e~9H}Y+bN&EB~1|GnN81k?WC^-Xp1y@f6I? 
zMy~rs*>t$FMPL^Co$!%lGws>$r{@96ekUWs!d|AH7c zn0C^C$!vgFP9PS&$mT2th6o&CmJmj4rx124^4lK>zm08`GyO&iMeQVivKb)uoK*E{ zt1Cq?Ip!F%gfU_}g|Wy7f#VirN3qRp8q-edfBbl=YUB-(a3{3boW{UdqC?CQ#E9(_ z#A0AEwuf-rNcDi*`eRf{rtMeFB!2EhN|ZNFW~~juPV@M(PvRTdOCi-obBz(Xr2Az^ zF4$Q&^1Wcc(TH5q$2asllZm$_7Mz4xMmg^EduhqTauN&Br|C-g`s8Fy3t(F)f0;;6dlzCoOEB|q|w9pGf|y($Lg35 z1O0Z8lrO5b4kG`1T|H-2Jx493K=deZ(#VfQxl|pLOT3v#+gSK^8bI_@3Hbu+7^-)j z1>4FwVljoGo7PYtT`3S6V_-#8r|m0Td@5DXJ51)GWftZk*H^HNA>Tul%0Xf1dElgZ z!BCZnG^8D({bUtPu1DnW zf}sgPrnD*%xu$|;#hmY;`eJT2*vULmxM<8o8q#hNjav*5H}L)tUujTBb{t$D8s&M2 zSWL-7n~;+}Df3XCKMyr%@1Mx!DMXPc1?1Je*41+U=PFUnF*Ldvq83vadLHvE!&<0D9WALip506~9|`t@+t za(os6r_$W8PwB%x1`VblbkauPq92qlda_JHd}F5#T88%P0Sf+AX33VI*0iZF=YOxu zLkxL{uM0#ircksAI4Qcch!G-fDYsJ*K=h2HJm1Gu%Q2b+oXd8ZsPs%^(qalkX8|XT zoGBV#dp7YGbK0K5q|*UIO>z_XylN(f_-X^!QefF1GH5V`pr0TY-6~!5@*vE!b`-R& z?eVJsLI=xQ?(iz>5OS?;%Mc<4QxN(Ka?xK&7hNC`;w$Cqv}x#fD*!_`3k9RInGoWu z3;fS@%OR_lL)2mlM2VAukA6+~Xl!n+DCHO-(k7yPmjVTEk?Wz3oP89Lz~(ktNB1Uc z9kG}~(Ivo1-zSzsUJE}%_1ebVE2{xR^Cjf#+?7XE)!XmiKxXChHmKqPI{Si z(wRY^=RJm}?aZB-0t!x$nP_RRb(yH%b>>*sM0^!EwU|QD%g9MDm3@W2z)5*2($;e; zl9ZwYWU6ekk0DYUxN2^6S9A5mVhTgg0~hs9l7W6@m4SLr+gP}-lQQI;Div%E3Qnu) zIoDaREfW!oDHPS9TMOPN>$$^%MF(pps?*kU)0P2>?v@mF+E>r{-|JRIR;`Mt#T1Hu zft>UXSRl{rxtX;h3-J-8y;R|uGXX)5 z3xcYvnYs#m261;Puq=lR8cZRmhO8q`0~Ez%p}D?=DG%}OH~8P{D6)$p zVljoH&mt#HNGH7@a8jQlzFJNT(d{h&qUVKv(QB9(;wue^LELI+bgzbp!4!fHSOh%u zhcXY%4~ivLg0#il7ta9t)yZORd6jhtxmwP$`C|W5P|Y=bS%;L;oWjUl7pE{VYQgt2k{wRYFC&9JoWe+<$SDl$ zs_@y2ctXWF}ZlBvSvPX8@u`HXXO z&2gdsP$8X6wI%ZPZ~J2mJKb#~)UP{{Uz^OX&rGhL5^qW6+f%t>I-8lC+Ti~?UGTp@ zwPBU^SjPcfM{U`qF_wPo?gtvjG*G?fLk6eCT4eR<-|pM z`1ZgIz~xCXpA_2~Xl4g4pcHbrj80sLsT5p#kXzm;-Ey;>sNl#i+8ZC1t);*mBtZ6h znRbFXLKawqIb233F2qy{F1G@wjBFNOIgT~GXpb^|F-vLjj+S-eieOXPj`RY$nNH5xX@r&00%^s79@2Vj2*#QjLo^Zg7PGE?s6kvXd z-0}?xOb_dNUppTD->6cYE2oU=n=Goxz^<;w|q0Klo-jdI@CEC(m=|VBx-tWDI?0{#Vv@Vs8FHLu+6gSo` zAg{fdx^b0o;|Nzhe3sGAcOGoF2m5oynv(UU+o%V74YFr=P9IY!Ht54>u-dKlN?+@l 
zjKbQ!#3P>tK0HP)yAI)M;~DhfI#Rv5Gq>e9b8FIvwKt#Ma4Ya(@(O18_ijx3FgQ-* z;;>zDSi51r=@#I_m&!OiGw58~?Zcp1;e5DsA8ro2q#0P9v?VfYCv_+LkMJwK;q_mp z_XtG3DOPCv1(ktU>y^DFyE^O#)%lH;Vm_Pf@w+VQv_I&{vd95VSr$2E{p!a&5}69%FNpD++P_=JJz5&Hc5&`^Wof>9c%Fc_tQAi^jOL=i@5AdE0d z194=1*jl$Cz9ieVu7A_NfA_R2)m@0s-WXq!O61!+{r`2yq+>=f@EtRPfg70-3_Qw= z;EFjHNLXgR7$h$NHPdth9radWk@mz zV1^`v2xdq!2w{dKS1g91b-tNzuCG`OLzny+`sRwoFm(B!p>M8O3`3U!8v5pn#jyH{ z#V~YnpqXDAT_R{ma>Zg;L&fSabkU%pU%O&;SVP5P7`=?JxyI11U9lL}P_Y=+P_a5} z3Rp{?7IqaoB|bZ~G3%dznbVah6w>Yf!Iz*1He>^gTO^2D+#*5A#Vr!VDQ=Mh$x)mKuF;f2Vx4RI1p4g#hHyNXjg~v>(EvX zr#Q1wrQU2*sRx*cKK(f@J|~+g6!VF+KX|(&QA`(BCkD>z$N{M#TSCTU1FIU74NPoI zHpn8zWLMJHKz=dy-5}W*lMOPDG1(vuoe{RU9dT>{rLH;Qpzt*(9E3J=!a-~^CmaMf zbHYJ%GbbE`H*>;4d^0B;1UPfTD;eQX+1(j7yiSiFpY0gfyBjz#)&Gjx+ulG{B*uhjGvTG z`WI0TtWk$tn(_$)iG)uWa1K6UAerz9GwL89q40efC_wpyfuzDG3>2ZE$6|(t8mw9w zrGezaC=CP=Mrk05FiHbqgi#uZBaG5OAYqgSA_=235K0)O8I2`qP0ROfXj#iB&1ft^ zi(AHTGa5_K5|{DYjK&hQ&}IBKqp<`nclo{zt#}!w8Ldmux|i|WjMgP+^~?BeM(dKY zp4px0-<`QSl}}~bQ~fWQjePAL`n2%KB!j$XNHXwSLy|#0G$a}LvmwbKPa2X8>Tg4m zLH;x(8Pw$)!){BDjORMA;5Q{2WLZ<9K^QY78pJVEqCp@tB^pFBQ=&mAGbI|tGE<^K zFf%2(a?uR691MLm>OYtgUAbt6IuoY8x^mGB^(;(%b>*TN>Smbw>dHkk)aNkt)u`!V zN;Ig=LLc?3t9{!IMdIo1Avcpb#esmyDGro;oZ>)W9? zfl@6KcHuc3%)x4tRUQaqtnxr$VU-6W3#&X3T3F?Q*up9g1Q%9$AiA*11K}kS`p{Ng zU0h`I_$>b#!dd>2bN@{azkR*5GZkN)@^3m?kAG;IqMAx-OmLZoROK#VMe z#-cucTqe7|E0yf%Ptdu1s^CA;=-(GJjF+AHg#xeR7Yf{rUnuY^exVg{B#>VDzZ4`? 
zexV?R@(Tq?v&bJDK$Ifp4uyY(ks5{NIZ9bHkU2 z4e>=iiA*uQdSidnb(udnoG<$Kf2NBYhwQiV3k8vhUnnS=_=SSF#4i+-O#DJYRN@y3 z3MPJ`ASUq(Eo-?n?BGz=rQEQkmUSsNY^fk9g%A3}mI|Vh0jVG?8ITI%k^!k8Fd2{v zB9j5BAT$|}T9MdP-lg1dc`ff!ZrD;Q5}V4qlpFT9RwOo+cPTgQZ>>mdD(_Nm*xy=_ z*i_!7+;G1YE#>&*a095DLO+96H)Y^)h(ueuD_tn2+xrK03fYXmm>hEb$SxB^Lw1>< zPGgq|A|bm>P@}QS1W}M(CaBNYWr7IEE)&#d;TNmJjR|1c$S)K`KYpPg6!8lMv4~$N z2uA!uK{VnQ3c?Y;P!Ny!g@S;@FSHU72`v`czZ9(%`Gr;@BB7Nc|CfS@6n^?{>R{z= z{3QQn-uTjVcWTJO*PwJ@WrNZ|0x~GQvUUe4$>6tx++BT zEq+2`ZT~dPl5}^jD;1xcYEKvZvkgN(s>Uu8#5Z=CAgHm+1d)tgCJ0~bGC|B@mk9zD zyG#(B*kyu{6n?Ea+?W8$FMgq*=;9X&LJ_}E5R3SQf?&ii6htF_p&%Ub3kC6rUnmGj z{6Z@ckx*&G{-x#ZXbv;>)>k4Tp^j$w%r;$Ao^!JOmFEUodJfrAWS0qkTy~it+Ox|9 z!Jb_vi1q9;L8xb!2_ikaOc3bVWtKJ04>u;1welRU(6UyZ!xaibQTU8DT%jNq@e2jP zh+im(M*KoSIN}!y;t{`45RmwVRw5#mxAGh&r{%3Yhby!a5eZeE?2c4liHL+|u1^cS zr8Xmed@7OeA5e|Y&u6>i^EM>@q=N%`OuJAa38E0YOb~|H zWmX^#p?aG8J5f2!F0%r02o=-p-&uh;gi2}l@2o%^LPrLK)?D*JbWdMFoqsXOy+*R=ZAsf%^*kef-&5 zWR?hGdg#YI>uV)~${~^yvdL7}&>bj4k^!HFBm+(jNd~+ck_@;tBpL8)NHXBqkmSno z3|5SxUtt?zBmoU9Q=&oL7P>`PU*}&gkD4Jh0wK+yKeIj9`!vLZ8j7Z=jKKel{QXC&s&bGU@i=y~1QtJcyen#e?u^ zQap&FCdGqbYEnFisV2pP&}vdVh_EKbgFySy(1A8Y*a7j@sC*E1jmigs&!~J5`Had3 zq0gv%5c`bE2f@#%d=UMN$_L@ksQe1YKQymn^83;3j#2p_{-r|aAOA|xl;8Kw4eOCn z4l#zl1_W)6uK}@{LktMP9AZGE857^Hv&5rY%}edsq0>Ziq9I@4X* zLN?dw_qMv?^RqpfF1IKm%!RaatTwx?K_d|FS+f8QmWX-j0*PU%IEvDO+}FjHTbY`+>$WP2O&=H;zqv$L-j- zOFG)PYfWrZZIkz(ub;9*bIsfleptaj>)hVC%1@bzNo)LfdHRi}$iDZOW2o$0M97}j zD`d|Zq3k&)ow#&y^Xz3yn@?OYsXGZopZpm3?C;9YzDkNdW`v@TZ=N&n#3l2v)JGo& zQcv;32e--}`{)r$T|95r+!F>7+VMRgG_(CzZWX`siV+H3o$lz#`#4p1q_W+qV%~p( zsnDLvq~gKbm74r%{>(W|<$9oa!X{RJfdPx6=Vkr7~dXa(}QY~J_5o_IPLPi^p9m+3-k z5N*aU0Kdip85-(y;^$aYqBg!NfpG>=|t0&PAn zf0mgtIkbfRS-NtaiFjM8n26`{+3e~;V#Pzp0XVxlawNMzoAvxgbJ4#r-TzJUsq7#)9k2o@(IY+P zBIz+<^G;hnJ(%nlfb5ho#8?UPNns7qlkezC_@1GSvPZ83KR~bi0WOwmgspzs6M5eR ziF~mzhzZ_A8eA_4SuOowX~-X<-M_xH$9IBwTi%B@UQD!g4VL_lI2HUFAC|hb%0L!2 z=d|ZH=KLR{zYa_FzfY>Q&4N`xo39IPqB((T*r=N9j})0yv6$BWDDAC4jfoP}!)t{e 
zOF|kb=^vl*LsxuFHtnCI(b?yjHlWI}Qk6RCD=R~)lIlu#r!xaB-FTv~cF6Gl`6N(i z7pc%85}##Z70UE09N33XsH3oK6>ju$c zW}5yIb>fHENgAUpIy4NT!`FZg-lv6I#%4)I%A&=zLA0n_Lw^bHTIrapr50h=x2qG~ z>8_1~8RExCe3SSQqVelsBp&} zr8|}?ez4#)D@(v!16$@6NXwIJa-z<=pkv5oU};TDp2Xe4Fbov z^FWRNC%iMZQ2sEX)yVtH{qAH=_$O|~Z7BdX&XV@IxK{Y*NZa!wW_J30z*I8hALJU0 z(}RmZg>Q;qB08^p8Vu&7k01?xCtac_DJlI&7%XBR)dPNnv!n{`Iq@rmZ8CIj^w&kn z`1&5dGB3pa^L2jRHxw#&ZU9PsP&&sZiAva&Yr3neM-+eBy8M7MsU}?_id9J`4XekR zY+Io-n~S&Sx&8W_bh6cS|TF<3wM83PtFj0w1*e#y!^alToY%1xy?~^BKS4KX9kC-aaQ?2()=aYSU6Be0E{Tg={X{mF^$>90cNqi-7!VWpv*x z-8F1^p3D0O5;Glxh(GpXAbuz56N_3){`^4vU`hYyNdEsy`8_Q{e%K~ht|wpU@%JH9 z>k|F1M-IgGL4J#lOTce&wJ=3=vrr|hUk-HP{IQGv#%zBJT?^}N9|OwF7RtoVlW`+# zOW)tEE++i5{(c40U$ysdzQ`&Sr`&lUi-HN)s;HTdHCZRubncVjNX}mw6w5R>O z2Ce^0xB{s5N$EeEYlSzrhKy@SH9sGA4Y9j<+?7DDncFQW^$HvB3Z03ysf~lhn-yDt zc+^M@>&1n9yt8|V`t`P}s1lJk1W6NGglWRA#tS`d1%HLnKPMLNO89N=p|-}pvz033 zbxLQsP$r?Up|8+WD0KH@l}U7GyM`zb?zkFgbBV|a(Ukakjt}d(>(j;d&i)9zcCg*% zZ(RdaI6-=8gY?p{ZL4C+f9#??n;R%22dfP(zLsjSk8qPnulOCpmTSd)Div=p3=#Do z{3KN(@-v~uLDE^n=Am_o_MV>ZZ2@<6Po{mqZFS3xEjLh&yoJI)VkgVg7}i_2sWh$^+74(mA2Z$st3^wuX+MsuSt1p;uyW-9q)*S0EMX z6WWBe(Kc=T4BaqQUnQJFgcMatoKL%))c_t;~uvGxo8e^ z_Q>V7WwUF$6Zy4R@~O82$soDR8o6M9zrDMV?ZR>&M{=)}G&ISe72=EA^VvcnpIVJ2 z*WE$CIC7!zw!K<}w;en3FZSF1-32UlGm?6@kh)K+kUG-&65p}9(ruI1_;qo9BNqLg zyXXhE3dv)ugyaxEbid@Tp3ERl{qei0T<=C%MQxT`k93aO|K@Igf&+i;&R+yl>!j3k zNhy0N6y!|nU_Z)XUj{$Q z@1+*S&=_vh!k_SMNKGmfHv)s)k7RF@vfFFLFEG;mf5c%ac6GdxEQy|&a z->#TrpPQ)&HUj zfC2B){Nrt7y;zH_k5DbVg7_I?ZPHImvxp!zm`TK{9Qz%h%ICH_hs^%vQ)S8^w)yg- zRF%jDBIoVZB%GssnoJ!;lbMfEO}y*mkFr^+GSVHVclxQY&$h8JY@lV|1G=0dYt|{! 
zSIVeM!yv!Q=I;YtuHXK^@^Pb;ReY`4}cQ;6lN-K@tIbDHOnehjp%FEI~ z3tNPNO7jx2Lywh+{t+m#UMewNDiN-2B!DTreG?dKa{r%!CYz)tGkV3(QCdZ$Yz1Ro z&iD(^O3D<^Ln;%kv zXa7MJ@R}v6_0lNe`nzH%+kb-1Ki}Pd>U}6jedwP+p>w1{Gh{>!r%=Xs)c&o{ZSP(T zgPi=ofI_=Th&PB>7B1uU>yyfKbfpWOL;f^N-U6DODu0?O(pKS;jej@6w#$--)Fk?E zpvku+MLFrWAvF>AHxAwFy#;AsCCetSCt{40MPE;3fod|5Pi&}*;A0DBzY*(f>KBp_;zsJ^p4QtD`y>YEh)$av5 z?I2xck^F(Op>?X0n#5=Mx9H6Be<(i-_-~yr?M(TPh5DMV=pPK$s<~?|Rr6?J#hv93 z*A`mI`nNs9zBrZh$KU&>wFWM-*O_$wEUK56ls{iW{(RvzYltuE@yGDfs|U)?WqwM} z`(tDoKQ?K9u^+Wj&A^yTBJ?ljU-mD;q<`mRd%q?9y9<5OYSnuDJgS!Wru@Oy$scTO zXvL-urda%>b^diHOViyct%~ox9;jF^vwp4gw(!5*wD<{$wfz(Tr49+S7?h4>#Z@Jj-NRR==c|@V^6QJ z>qz@&+c|j0do(&-ifO9LvBJZ4mYR%oakX8Ox4rP8)8o0*s2(1=sc)nw#zIUk%y`(PW~8G&ij{) zU@g+$L&u0I@!6@3S-*VUUS9{QX(MaTw}D>ok$N2_6XQsa$4Glljn7K@RZYRaJk2kp zhoRDOF9MY!auKpl{#5C(hMeZVbDHVDn6m%cvL${EU09t6j_nA;YJaoe?|d}Ljvl)C zd87?P`I}9TAHVJDf`QzquOXg&A5aIGFm$?}5ucyV`}e$^l=tsK4ph_rKpRd18a*K` zxV2WG7&=|g^a}{lKJ<&4k*W9TbAWnx%DCRyBGe0Qzxe~30oC)wkfiq4+UpBco#VEJ zt*C#&dvIm@NRRL5dfNPBwv)HtVVrKyf+6Q;@1epMikKPA3F#xHUr5jOuN{inp7zeT zFF)fS(oXfy_ht+Kzqm6AZ6XTeIEi&bD0Qht=%Fs?K{H83tfeBhYE_6Btjk5QOD1U~ z?TnpClit*WMMbO&E;j`gJb7?M(7K^iPlDF9H$@PXqM|5>elO|!uzlZ~^qEP#q)l3w zU;f|r-n{p|(ZiDN!RM}sK;EZloDLjJsrH&0LTa(#07g?gWJ(QwB^0T@N&p-}03E&K zA1%TFOv8Xymh|m6+Yv(9#4Q5$IVyWHz?JPRs2e1W2u1s{m!PE% zXeJ9zyO0of+iek-I3BPgAF$~rU2DR#;_E?7<^4h-GT>R z_!enZ5dCXb1w+0~4Zii8HfnZ?3iyFLqH^mw*mx{Xt+^;ifU%+Tq~6TlN{Y|tM-kfkpO38$i4&I|FBuDX6VDhXSR4qZkT zr^tGdY$Y1k?L?lIFA__EcF9=SZj}MvB>+92c$~~e^X4?nVflH)C?do6*XJ_8@|T2d zFJgEc3vk=IJhisNUfprk-h-Cgcx)DD%@-7YCDS^k8}2dy&T*MQ7_qd#WyHWhYu| zL%ZM>3D@oqBG*=K#|b94qv}RsXDTBemf+NU6mh7tQ>wc=vhp~JbYf7X3UZOgqv=#P zggQd4O2Kklom{CiUqmL);1JZfIoxnL)yMMd5>c1*Lygdt42(-1!K`x zTjE;#e-Ky~5UUle%LL z4t28NiXwt2g*CAX#}uyS(Co&WK8fJEj9e>_%T?p8)5Ebuo5b1AOeT=dqCXw2;4!GW zuh21PWsn@bs=AB-t3jF7+yaSp-@-_p_F@Wd+#k&lnSfJfla_<4C5lfOeZ+`5+|jy!w|(CCd7t;3 z=Y5aw$9~QEp<^1Oe%dxIPCbyQd&&n9)F7xuP=}x%K?8zD1Y;1KhF~m$(-DkAFdjh@ 
zf(ZyFB4|c13BhCppG44t;0y#)5KKjICW2`QrXy%YFatpwf|;kGaKw;QqTNL*HWwL9 zOK;k|=ei7Sxb&nV8cVc$NX-+t@1azHOWrW%WB*#Mhyyn>_2h(6aG9d>VEu@U~5u>aRYF5p(Va4kg>#J)5+giTvz4=$-ORA8^uZXmWgVZy*_Wvmf$8Hu<< zsKY|DT_MA3gypdeUeLm&;4nP4*{+<~D4dHJsJ7qpr1YAZ&GE@q3qoUkP^E}wR zJwYIWGH_^y+F;^<{+Vkczn9V<#FZG-$TDfUE_Tl3`Q+YR@uI+Ddrfv8d*Ls! z3VY2)?yi;u`?@%_VE2URIS@#*MKtWT+H)&Z!jdL?txx^o9kqAKjJ?q0m`%U-5VFza zNH(ewt1XV+msIXfk3gHL)h%JBeQ%Rv{Sg(TR37ZTpwEpcT$d!}S26mdjS@fU!KbCQ zc-6pwmeahuybZ`ngC|+pDnSix?N~z3gz1szdTYZFH)6P`MtS0P_}&np;pgm3uPLg~!@x*;AeJ#A6S_ z{ugh5%cjRR3i$Md#r9?xoOs0Il*y*!awTv6N>@}v-Mxv~*nAE+$ljB30W}?;5c9oc zJ|UrW*+6D}DgXUg+18BTUYjhNg6~X7{5_d_Y4SWjN-^Qy8`>GnRQLPjjS{=IS8u7c zX<$RgBXxl1j>@aJyu)LWA#8v_OO}DG8m9^vP1Emcj5tK=O+7lwAMz&Zl!!+sl&Q~( z@b3eQvX?I{oTsqiBlN7&EoxSOxpGJS_8Wi)`-8T^uzNeKl7ToEDGQ5L{LKwYGj{jb zv&cxBQln$yo^4n7i

- © 2019. All rights reserved. + © 2022. All rights reserved.

diff --git a/_site/README.md b/_site/README.md index 97c99a8a80..fe3eafea7d 100644 --- a/_site/README.md +++ b/_site/README.md @@ -1,10 +1,10 @@ # AIMA Exercises -Aima exercises is an interactive and collaborative platform for digitalizing the exercises of the book Artificial Intelligence: a modern approach by Stuart J. Russell and Peter Norvig.
+AIMA Exercises is an interactive and collaborative platform for digitizing the exercises of the book Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig.
Exercises for the book [*Artificial Intelligence: A Modern Approach.*](http://aima.cs.berkeley.edu/) The idea is that in the fourth edition of the book, exercises will be online only (they will not appear in the book). This site will showcase the exercises, and will be a platform for students and teachers to add new exercises.
-The present version of AIMA-Exercises uses Jekyll 3 and ruby 2.5. +The present version of AIMA-Exercises uses Jekyll 3 and Ruby 2.5. **To run the project locally**: 1. Install a full [Ruby development environment](https://jekyllrb.com/docs/installation/) 2. Install Jekyll and [bundler gems](https://jekyllrb.com/docs/ruby-101/#bundler) diff --git a/_site/advanced-planning-exercises/ex_1/index.html b/_site/advanced-planning-exercises/ex_1/index.html index b94ff2a46b..c0be88d3ec 100644 --- a/_site/advanced-planning-exercises/ex_1/index.html +++ b/_site/advanced-planning-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_10/index.html b/_site/advanced-planning-exercises/ex_10/index.html index bc197edac2..43af4eb3a0 100644 --- a/_site/advanced-planning-exercises/ex_10/index.html +++ b/_site/advanced-planning-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_11/index.html b/_site/advanced-planning-exercises/ex_11/index.html index de5f7d123a..fe1218e9d0 100644 --- a/_site/advanced-planning-exercises/ex_11/index.html +++ b/_site/advanced-planning-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_12/index.html b/_site/advanced-planning-exercises/ex_12/index.html index 0bf0ba947b..52862448b3 100644 --- a/_site/advanced-planning-exercises/ex_12/index.html +++ b/_site/advanced-planning-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_13/index.html b/_site/advanced-planning-exercises/ex_13/index.html index 4c522be482..ff0ed2b9ad 100644 --- a/_site/advanced-planning-exercises/ex_13/index.html +++ b/_site/advanced-planning-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_14/index.html b/_site/advanced-planning-exercises/ex_14/index.html index d5debced41..8eedfc52bc 100644 --- a/_site/advanced-planning-exercises/ex_14/index.html +++ 
b/_site/advanced-planning-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_15/index.html b/_site/advanced-planning-exercises/ex_15/index.html index 50a3afff4c..fd35df3efa 100644 --- a/_site/advanced-planning-exercises/ex_15/index.html +++ b/_site/advanced-planning-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_2/index.html b/_site/advanced-planning-exercises/ex_2/index.html index 8484321874..0a320b58da 100644 --- a/_site/advanced-planning-exercises/ex_2/index.html +++ b/_site/advanced-planning-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_3/index.html b/_site/advanced-planning-exercises/ex_3/index.html index e0a72580f4..a5558c7c6d 100644 --- a/_site/advanced-planning-exercises/ex_3/index.html +++ b/_site/advanced-planning-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_4/index.html b/_site/advanced-planning-exercises/ex_4/index.html index 968137c251..bbb982c5c0 100644 --- a/_site/advanced-planning-exercises/ex_4/index.html +++ b/_site/advanced-planning-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_5/index.html b/_site/advanced-planning-exercises/ex_5/index.html index ecb83889cd..b47850edc5 100644 --- a/_site/advanced-planning-exercises/ex_5/index.html +++ b/_site/advanced-planning-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_6/index.html b/_site/advanced-planning-exercises/ex_6/index.html index 0b9e351a9a..57f83b8241 100644 --- a/_site/advanced-planning-exercises/ex_6/index.html +++ b/_site/advanced-planning-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_7/index.html b/_site/advanced-planning-exercises/ex_7/index.html index 8ba7fe6993..5444621e75 100644 --- a/_site/advanced-planning-exercises/ex_7/index.html +++ 
b/_site/advanced-planning-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_8/index.html b/_site/advanced-planning-exercises/ex_8/index.html index 549ce7c9d3..903bfbfb12 100644 --- a/_site/advanced-planning-exercises/ex_8/index.html +++ b/_site/advanced-planning-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/ex_9/index.html b/_site/advanced-planning-exercises/ex_9/index.html index 2c8609295b..3677e921ad 100644 --- a/_site/advanced-planning-exercises/ex_9/index.html +++ b/_site/advanced-planning-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-planning-exercises/index.html b/_site/advanced-planning-exercises/index.html index b7380ac2d8..62bdaccf3c 100644 --- a/_site/advanced-planning-exercises/index.html +++ b/_site/advanced-planning-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_1/index.html b/_site/advanced-search-exercises/ex_1/index.html index 966c81eba4..4cf01a7d92 100644 --- a/_site/advanced-search-exercises/ex_1/index.html +++ b/_site/advanced-search-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_10/index.html b/_site/advanced-search-exercises/ex_10/index.html index 6417711226..5c8a5e73be 100644 --- a/_site/advanced-search-exercises/ex_10/index.html +++ b/_site/advanced-search-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_11/index.html b/_site/advanced-search-exercises/ex_11/index.html index a6436dc8fd..3993a4590a 100644 --- a/_site/advanced-search-exercises/ex_11/index.html +++ b/_site/advanced-search-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_12/index.html b/_site/advanced-search-exercises/ex_12/index.html index 7c9d740563..c2f269b2c9 100644 --- a/_site/advanced-search-exercises/ex_12/index.html +++ b/_site/advanced-search-exercises/ex_12/index.html @@ -82,7 +82,7 @@ 
diff --git a/_site/advanced-search-exercises/ex_13/index.html b/_site/advanced-search-exercises/ex_13/index.html index 13c368c4a3..8ff0746192 100644 --- a/_site/advanced-search-exercises/ex_13/index.html +++ b/_site/advanced-search-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_14/index.html b/_site/advanced-search-exercises/ex_14/index.html index 7b507317c0..67cc8cfdc6 100644 --- a/_site/advanced-search-exercises/ex_14/index.html +++ b/_site/advanced-search-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_15/index.html b/_site/advanced-search-exercises/ex_15/index.html index 2a8ed5e42f..bac5b86108 100644 --- a/_site/advanced-search-exercises/ex_15/index.html +++ b/_site/advanced-search-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_16/index.html b/_site/advanced-search-exercises/ex_16/index.html index a0577b9008..bc052a7b48 100644 --- a/_site/advanced-search-exercises/ex_16/index.html +++ b/_site/advanced-search-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_17/index.html b/_site/advanced-search-exercises/ex_17/index.html index 178a40e02d..80959ae480 100644 --- a/_site/advanced-search-exercises/ex_17/index.html +++ b/_site/advanced-search-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_2/index.html b/_site/advanced-search-exercises/ex_2/index.html index 1e7307717b..fdd6624221 100644 --- a/_site/advanced-search-exercises/ex_2/index.html +++ b/_site/advanced-search-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_3/index.html b/_site/advanced-search-exercises/ex_3/index.html index c5f69b1981..9481d75c21 100644 --- a/_site/advanced-search-exercises/ex_3/index.html +++ b/_site/advanced-search-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_4/index.html 
b/_site/advanced-search-exercises/ex_4/index.html index 21effe8ff7..fab5a96f7d 100644 --- a/_site/advanced-search-exercises/ex_4/index.html +++ b/_site/advanced-search-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_5/index.html b/_site/advanced-search-exercises/ex_5/index.html index 4888ebcaeb..5ab9a3f782 100644 --- a/_site/advanced-search-exercises/ex_5/index.html +++ b/_site/advanced-search-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_6/index.html b/_site/advanced-search-exercises/ex_6/index.html index 056490697b..4edeb5a0da 100644 --- a/_site/advanced-search-exercises/ex_6/index.html +++ b/_site/advanced-search-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_7/index.html b/_site/advanced-search-exercises/ex_7/index.html index fdba400218..435267e356 100644 --- a/_site/advanced-search-exercises/ex_7/index.html +++ b/_site/advanced-search-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_8/index.html b/_site/advanced-search-exercises/ex_8/index.html index c019223489..525fc7d938 100644 --- a/_site/advanced-search-exercises/ex_8/index.html +++ b/_site/advanced-search-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/ex_9/index.html b/_site/advanced-search-exercises/ex_9/index.html index 45a1f60734..1721810621 100644 --- a/_site/advanced-search-exercises/ex_9/index.html +++ b/_site/advanced-search-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/advanced-search-exercises/index.html b/_site/advanced-search-exercises/index.html index 73a1d78888..3b77fd8770 100644 --- a/_site/advanced-search-exercises/index.html +++ b/_site/advanced-search-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_1/index.html b/_site/agents-exercises/ex_1/index.html index ec51ba02b4..293fbfa7fb 100644 --- 
a/_site/agents-exercises/ex_1/index.html +++ b/_site/agents-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_10/index.html b/_site/agents-exercises/ex_10/index.html index 3157b0f56b..482ba88186 100644 --- a/_site/agents-exercises/ex_10/index.html +++ b/_site/agents-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_11/index.html b/_site/agents-exercises/ex_11/index.html index 3c5536cb27..62a924d899 100644 --- a/_site/agents-exercises/ex_11/index.html +++ b/_site/agents-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_12/index.html b/_site/agents-exercises/ex_12/index.html index 63dbfa6730..f99c0ba2b0 100644 --- a/_site/agents-exercises/ex_12/index.html +++ b/_site/agents-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_13/index.html b/_site/agents-exercises/ex_13/index.html index 6f80135639..fc52454965 100644 --- a/_site/agents-exercises/ex_13/index.html +++ b/_site/agents-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_14/index.html b/_site/agents-exercises/ex_14/index.html index 24c4522ebf..c6072e2a7b 100644 --- a/_site/agents-exercises/ex_14/index.html +++ b/_site/agents-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_15/index.html b/_site/agents-exercises/ex_15/index.html index 3a3590f77f..38be0d7385 100644 --- a/_site/agents-exercises/ex_15/index.html +++ b/_site/agents-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_16/index.html b/_site/agents-exercises/ex_16/index.html index bbc9f1f92e..7109baa273 100644 --- a/_site/agents-exercises/ex_16/index.html +++ b/_site/agents-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_2/index.html b/_site/agents-exercises/ex_2/index.html index e6e14603b7..8486cc081e 100644 --- a/_site/agents-exercises/ex_2/index.html +++ 
b/_site/agents-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_3/index.html b/_site/agents-exercises/ex_3/index.html index 534c01719f..ac05d7998c 100644 --- a/_site/agents-exercises/ex_3/index.html +++ b/_site/agents-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_4/index.html b/_site/agents-exercises/ex_4/index.html index b49d690e25..9949ef0615 100644 --- a/_site/agents-exercises/ex_4/index.html +++ b/_site/agents-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_5/index.html b/_site/agents-exercises/ex_5/index.html index d276a11769..e705b6bdc7 100644 --- a/_site/agents-exercises/ex_5/index.html +++ b/_site/agents-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_6/index.html b/_site/agents-exercises/ex_6/index.html index d36fbb02d3..4dbd9584e9 100644 --- a/_site/agents-exercises/ex_6/index.html +++ b/_site/agents-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_7/index.html b/_site/agents-exercises/ex_7/index.html index c3ed5457d3..69ffb07b2d 100644 --- a/_site/agents-exercises/ex_7/index.html +++ b/_site/agents-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_8/index.html b/_site/agents-exercises/ex_8/index.html index 8bb6ba932d..f73811fa4d 100644 --- a/_site/agents-exercises/ex_8/index.html +++ b/_site/agents-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/ex_9/index.html b/_site/agents-exercises/ex_9/index.html index f5eafc7010..6cb0b521f6 100644 --- a/_site/agents-exercises/ex_9/index.html +++ b/_site/agents-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/agents-exercises/index.html b/_site/agents-exercises/index.html index 705aed069c..d1dd313d48 100644 --- a/_site/agents-exercises/index.html +++ b/_site/agents-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/answersubmitted/index.html 
b/_site/answersubmitted/index.html index bb1123da31..b8e6c827e8 100644 --- a/_site/answersubmitted/index.html +++ b/_site/answersubmitted/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_1/index.html b/_site/bayes-nets-exercises/ex_1/index.html index 5ab18d3032..c27b82b6b1 100644 --- a/_site/bayes-nets-exercises/ex_1/index.html +++ b/_site/bayes-nets-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_10/index.html b/_site/bayes-nets-exercises/ex_10/index.html index 18635edd3f..5bb0d97ba3 100644 --- a/_site/bayes-nets-exercises/ex_10/index.html +++ b/_site/bayes-nets-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_11/index.html b/_site/bayes-nets-exercises/ex_11/index.html index 7bfc7028d5..d15e5df4fa 100644 --- a/_site/bayes-nets-exercises/ex_11/index.html +++ b/_site/bayes-nets-exercises/ex_11/index.html @@ -82,7 +82,7 @@ @@ -171,7 +171,7 @@

1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let - ${\textbf{P}}(X_2X_1)$ be a linear + ${\textbf{P}}(X_2$|$X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
@@ -203,7 +203,7 @@

1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let - ${\textbf{P}}(X_2X_1)$ be a linear + ${\textbf{P}}(X_2$|$X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
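For the linear-Gaussian exercise above: with $X_1 \sim N(\mu, \sigma_1^2)$ and $X_2 \mid X_1 \sim N(aX_1 + b, \sigma_2^2)$, the joint covariance works out to $\begin{pmatrix}\sigma_1^2 & a\sigma_1^2\\ a\sigma_1^2 & a^2\sigma_1^2 + \sigma_2^2\end{pmatrix}$. A Monte Carlo sketch with arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, s1, a, b, s2 = 1.0, 2.0, 0.5, -1.0, 0.3

# Sample from the two-variable network: X1 Gaussian, X2 linear-Gaussian in X1.
x1 = rng.normal(mu, s1, size=1_000_000)
x2 = rng.normal(a * x1 + b, s2)

empirical = np.cov(np.vstack([x1, x2]))
analytic = np.array([[s1**2,     a * s1**2],
                     [a * s1**2, a**2 * s1**2 + s2**2]])
# The sample covariance should match the analytic matrix closely.
assert np.allclose(empirical, analytic, atol=0.05)
```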
diff --git a/_site/bayes-nets-exercises/ex_12/index.html b/_site/bayes-nets-exercises/ex_12/index.html index 1257fd1829..96ed25c2bd 100644 --- a/_site/bayes-nets-exercises/ex_12/index.html +++ b/_site/bayes-nets-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_13/index.html b/_site/bayes-nets-exercises/ex_13/index.html index 3e05f5a726..37291878be 100644 --- a/_site/bayes-nets-exercises/ex_13/index.html +++ b/_site/bayes-nets-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_14/index.html b/_site/bayes-nets-exercises/ex_14/index.html index b718215fd6..9bd0275f56 100644 --- a/_site/bayes-nets-exercises/ex_14/index.html +++ b/_site/bayes-nets-exercises/ex_14/index.html @@ -82,7 +82,7 @@ @@ -180,7 +180,7 @@

2. Which is the best network? Explain.
3. Write out a conditional distribution for - ${\textbf{P}}(M_1N)$, for the case where + ${\textbf{P}}(M_1$|$N)$, for the case where $N \in \{1,2,3\}$ and $M_1 \in \{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
@@ -227,7 +227,7 @@

2. Which is the best network? Explain.
3. Write out a conditional distribution for - ${\textbf{P}}(M_1N)$, for the case where + ${\textbf{P}}(M_1$|$N)$, for the case where $N \in \{1,2,3\}$ and $M_1 \in \{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
diff --git a/_site/bayes-nets-exercises/ex_15/index.html b/_site/bayes-nets-exercises/ex_15/index.html index 6bf4e50fcb..ff507f46e7 100644 --- a/_site/bayes-nets-exercises/ex_15/index.html +++ b/_site/bayes-nets-exercises/ex_15/index.html @@ -82,7 +82,7 @@ @@ -173,7 +173,7 @@

in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution -${\textbf{P}}(NM_1=2,M_2=2)$.
+${\textbf{P}}(N$|$M_1=2,M_2=2)$.
@@ -209,7 +209,7 @@

in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution -${\textbf{P}}(NM_1=2,M_2=2)$.
+${\textbf{P}}(N$|$M_1=2,M_2=2)$.
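The enumeration query above sums joint-distribution entries consistent with the evidence and normalizes. A minimal sketch of that ratio-of-sums pattern on a toy two-variable Boolean network (the network and its numbers are illustrative, not the telescope model from the exercise):

```python
# Inference by enumeration: P(A | B=b) as a normalized sum of joint entries.
# Illustrative CPTs for a tiny network A -> B.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def joint(a, b):
    """Full joint P(A=a, B=b) from the chain rule."""
    return p_a[a] * p_b_given_a[a][b]

def enumerate_query(b_val):
    """P(A | B=b_val): sum matching joint entries, then normalize."""
    unnorm = {a: joint(a, b_val) for a in (True, False)}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

dist = enumerate_query(True)
# P(A=true | B=true) = 0.27 / (0.27 + 0.14)
```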
diff --git a/_site/bayes-nets-exercises/ex_16/index.html b/_site/bayes-nets-exercises/ex_16/index.html index 505bd65dce..cdfe6484b8 100644 --- a/_site/bayes-nets-exercises/ex_16/index.html +++ b/_site/bayes-nets-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_17/index.html b/_site/bayes-nets-exercises/ex_17/index.html index 33e87a0efd..14f2169a9b 100644 --- a/_site/bayes-nets-exercises/ex_17/index.html +++ b/_site/bayes-nets-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_18/index.html b/_site/bayes-nets-exercises/ex_18/index.html index 9ceafa9c3e..0bc22636cf 100644 --- a/_site/bayes-nets-exercises/ex_18/index.html +++ b/_site/bayes-nets-exercises/ex_18/index.html @@ -82,7 +82,7 @@ @@ -171,7 +171,7 @@

1. Section exact-inference-section applies variable elimination to the query - $${\textbf{P}}({Burglary}{JohnCalls}={true},{MaryCalls}={true})\ .$$ + $${\textbf{P}}({Burglary}$|${JohnCalls}={true},{MaryCalls}={true})\ .$$ Perform the calculations indicated and check that the answer is correct.
@@ -182,7 +182,7 @@

of Boolean variables $X_1,\ldots, X_n$ where ${Parents}(X_i)=\{X_{i-1}\}$ for $i=2,\ldots,n$. What is the complexity of computing - ${\textbf{P}}(X_1X_n={true})$ using + ${\textbf{P}}(X_1$|$X_n={true})$ using enumeration? Using variable elimination?
4. Prove that the complexity of running variable elimination on a @@ -213,7 +213,7 @@

1. Section exact-inference-section applies variable elimination to the query - $${\textbf{P}}({Burglary}{JohnCalls}={true},{MaryCalls}={true})\ .$$ + $${\textbf{P}}({Burglary}$|${JohnCalls}={true},{MaryCalls}={true})\ .$$ Perform the calculations indicated and check that the answer is correct.
@@ -224,7 +224,7 @@

of Boolean variables $X_1,\ldots, X_n$ where ${Parents}(X_i)=\{X_{i-1}\}$ for $i=2,\ldots,n$. What is the complexity of computing - ${\textbf{P}}(X_1X_n={true})$ using + ${\textbf{P}}(X_1$|$X_n={true})$ using enumeration? Using variable elimination?
4. Prove that the complexity of running variable elimination on a diff --git a/_site/bayes-nets-exercises/ex_19/index.html b/_site/bayes-nets-exercises/ex_19/index.html index 14c54f9cac..e0bb2307e2 100644 --- a/_site/bayes-nets-exercises/ex_19/index.html +++ b/_site/bayes-nets-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_2/index.html b/_site/bayes-nets-exercises/ex_2/index.html index 9a879c7c4b..99c64ca37e 100644 --- a/_site/bayes-nets-exercises/ex_2/index.html +++ b/_site/bayes-nets-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_20/index.html b/_site/bayes-nets-exercises/ex_20/index.html index f4b7b344ad..336b47fbd7 100644 --- a/_site/bayes-nets-exercises/ex_20/index.html +++ b/_site/bayes-nets-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_21/index.html b/_site/bayes-nets-exercises/ex_21/index.html index d5cfb78a41..3bb697fd30 100644 --- a/_site/bayes-nets-exercises/ex_21/index.html +++ b/_site/bayes-nets-exercises/ex_21/index.html @@ -82,7 +82,7 @@ @@ -167,7 +167,7 @@
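For the chain-network query above, variable elimination sums out the intermediate variables one at a time, so the cost grows linearly in $n$, whereas full enumeration sums over all $2^{n-1}$ assignments. A hedged sketch of the linear-time message pass (CPT numbers are illustrative):

```python
import numpy as np

n = 20
prior = np.array([0.6, 0.4])       # P(X1): index 0 = false, 1 = true
cpt = np.array([[0.7, 0.3],        # P(X_i | X_{i-1}=false)
                [0.2, 0.8]])       # P(X_i | X_{i-1}=true)

# Backward message msg[x] = P(Xn=true | X_k = x); each matrix-vector
# product sums out one intermediate variable, so the loop is O(n).
msg = cpt[:, 1]                    # base case: P(Xn=true | X_{n-1})
for _ in range(n - 2):
    msg = cpt @ msg

posterior = prior * msg            # P(X1) * P(Xn=true | X1)
posterior /= posterior.sum()       # normalize to get P(X1 | Xn=true)
```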

Consider the query -${\textbf{P}}({Rain}{Sprinkler}={true},{WetGrass}={true})$ +${\textbf{P}}({Rain}$|${Sprinkler}={true},{WetGrass}={true})$ in Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.
@@ -207,7 +207,7 @@

Consider the query -${\textbf{P}}({Rain}{Sprinkler}={true},{WetGrass}={true})$ +${\textbf{P}}({Rain}$|${Sprinkler}={true},{WetGrass}={true})$ in Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.
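Gibbs sampling for this query resamples each nonevidence variable (Cloudy, Rain) from its distribution conditioned on its Markov blanket, keeping the evidence fixed. A minimal sketch; the CPT values follow the usual textbook sprinkler network but are quoted from memory and should be treated as illustrative:

```python
import random

P_C = 0.5
P_S = {True: 0.1, False: 0.5}      # P(Sprinkler=true | Cloudy)
P_R = {True: 0.8, False: 0.2}      # P(Rain=true | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}  # P(WetGrass=true | S, R)

def gibbs_rain(n_samples=50_000, seed=0):
    rng = random.Random(seed)
    cloudy, rain = True, True       # arbitrary initial state
    sprinkler = True                # evidence, never resampled
    rain_count = 0
    for _ in range(n_samples):
        # Resample Cloudy: P(c | s, r) proportional to P(c) P(s|c) P(r|c)
        w_t = P_C * P_S[True] * (P_R[True] if rain else 1 - P_R[True])
        w_f = (1 - P_C) * P_S[False] * (P_R[False] if rain else 1 - P_R[False])
        cloudy = rng.random() < w_t / (w_t + w_f)
        # Resample Rain: P(r | c, s, w) proportional to P(r|c) P(w|s,r)
        w_t = P_R[cloudy] * P_W[(sprinkler, True)]
        w_f = (1 - P_R[cloudy]) * P_W[(sprinkler, False)]
        rain = rng.random() < w_t / (w_t + w_f)
        rain_count += rain
    return rain_count / n_samples

estimate = gibbs_rain()   # should approach roughly 0.3 for these CPTs
```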
diff --git a/_site/bayes-nets-exercises/ex_22/index.html b/_site/bayes-nets-exercises/ex_22/index.html index a001f2c755..39d5d66906 100644 --- a/_site/bayes-nets-exercises/ex_22/index.html +++ b/_site/bayes-nets-exercises/ex_22/index.html @@ -82,7 +82,7 @@

diff --git a/_site/bayes-nets-exercises/ex_23/index.html b/_site/bayes-nets-exercises/ex_23/index.html index f18b5e1305..c251965900 100644 --- a/_site/bayes-nets-exercises/ex_23/index.html +++ b/_site/bayes-nets-exercises/ex_23/index.html @@ -82,7 +82,7 @@ @@ -169,11 +169,11 @@

The Metropolis--Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from -$\pi(\textbf{x})=P(\textbf{x}\textbf{e})$.) Like simulated annealing, +$\pi(\textbf{x})=P(\textbf{x}$|$\textbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new -state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}\textbf{x})$, given the current state $\textbf{x}$. +state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}$|$\textbf{x})$, given the current state $\textbf{x}$. Then, it probabilistically accepts or rejects $\textbf{x'}$ according to the acceptance probability -$$\alpha(\textbf{x'}\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}\textbf{x})} \right)\ .$$ +$$\alpha(\textbf{x'}$|$\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}$|$\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}$|$\textbf{x})} \right)\ .$$ If the proposal is rejected, the state remains at $\textbf{x}$.
1. Consider an ordinary Gibbs sampling step for a specific variable @@ -206,11 +206,11 @@

The Metropolis--Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from -$\pi(\textbf{x})=P(\textbf{x}\textbf{e})$.) Like simulated annealing, +$\pi(\textbf{x})=P(\textbf{x}$|$\textbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new -state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}\textbf{x})$, given the current state $\textbf{x}$. +state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}$|$\textbf{x})$, given the current state $\textbf{x}$. Then, it probabilistically accepts or rejects $\textbf{x'}$ according to the acceptance probability -$$\alpha(\textbf{x'}\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}\textbf{x})} \right)\ .$$ +$$\alpha(\textbf{x'}$|$\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}$|$\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}$|$\textbf{x})} \right)\ .$$ If the proposal is rejected, the state remains at $\textbf{x}$.
1. Consider an ordinary Gibbs sampling step for a specific variable diff --git a/_site/bayes-nets-exercises/ex_24/index.html b/_site/bayes-nets-exercises/ex_24/index.html index 7ea47fe947..a2411e01cd 100644 --- a/_site/bayes-nets-exercises/ex_24/index.html +++ b/_site/bayes-nets-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_3/index.html b/_site/bayes-nets-exercises/ex_3/index.html index b391ac8446..6cab8b5817 100644 --- a/_site/bayes-nets-exercises/ex_3/index.html +++ b/_site/bayes-nets-exercises/ex_3/index.html @@ -82,7 +82,7 @@ @@ -169,30 +169,30 @@
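The acceptance rule quoted in the Metropolis–Hastings exercise can be illustrated on a toy discrete target with a symmetric random-walk proposal, in which case the $q$ terms cancel. The target values and state space here are arbitrary:

```python
import random

pi = [0.1, 0.2, 0.3, 0.4]          # target distribution over states 0..3

def propose(x, rng):
    """Symmetric random-walk proposal on a ring: q(x'|x) = q(x|x')."""
    return (x + rng.choice([-1, 1])) % len(pi)

def metropolis_hastings(steps=200_000, seed=0):
    rng = random.Random(seed)
    x, counts = 0, [0] * len(pi)
    for _ in range(steps):
        x_new = propose(x, rng)
        # Symmetric proposal, so alpha = min(1, pi(x') / pi(x)).
        alpha = min(1.0, pi[x_new] / pi[x])
        if rng.random() < alpha:
            x = x_new                # accept; otherwise stay at x
        counts[x] += 1
    return [c / steps for c in counts]

freqs = metropolis_hastings()       # empirical state frequencies, approx pi
```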

Equation (parameter-joint-repn-equation on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters -$\theta(X_i{Parents}(X_i))$. This exercise asks you to derive +$\theta(X_i$|${Parents}(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities -${\textbf{ P}}(X_i{Parents}(X_i))$ from this definition.
+${\textbf{ P}}(X_i$|${Parents}(X_i))$ from this definition.
1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation and (marginalization-equation (pages conditional-probability-equation and marginalization-equation) - to express the conditional probability $P(zy)$ as the ratio of two sums, each over entries in the + to express the conditional probability $P(z$|$y)$ as the ratio of two sums, each over entries in the joint distribution ${\textbf{P}}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation to write this expression in terms of the network parameters - $\theta(X)$, $\theta(YX)$, and $\theta(ZY)$.
+ $\theta(X)$, $\theta(Y$|$X)$, and $\theta(Z$|$Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint - $\sum_{x_i} \theta(x_i{parents}(X_i))=1$, show - that the resulting expression reduces to $\theta(zy)$.
+ $\sum_{x_i} \theta(x_i$|${parents}(X_i))=1$, show + that the resulting expression reduces to $\theta(z$|$y)$.
4. Generalize this derivation to show that - $\theta(X_i{Parents}(X_i)) = {\textbf{P}}(X_i{Parents}(X_i))$ + $\theta(X_i$|${Parents}(X_i)) = {\textbf{P}}(X_i$|${Parents}(X_i))$ for any Bayesian network.
@@ -217,30 +217,30 @@

Equation (parameter-joint-repn-equation on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters -$\theta(X_i{Parents}(X_i))$. This exercise asks you to derive +$\theta(X_i$|${Parents}(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities -${\textbf{ P}}(X_i{Parents}(X_i))$ from this definition.
+${\textbf{ P}}(X_i$|${Parents}(X_i))$ from this definition.
1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation and (marginalization-equation (pages conditional-probability-equation and marginalization-equation) - to express the conditional probability $P(zy)$ as the ratio of two sums, each over entries in the + to express the conditional probability $P(z$|$y)$ as the ratio of two sums, each over entries in the joint distribution ${\textbf{P}}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation to write this expression in terms of the network parameters - $\theta(X)$, $\theta(YX)$, and $\theta(ZY)$.
+ $\theta(X)$, $\theta(Y$|$X)$, and $\theta(Z$|$Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint - $\sum_{x_i} \theta(x_i{parents}(X_i))=1$, show - that the resulting expression reduces to $\theta(zy)$.
+ $\sum_{x_i} \theta(x_i$|${parents}(X_i))=1$, show + that the resulting expression reduces to $\theta(z$|$y)$.
4. Generalize this derivation to show that - $\theta(X_i{Parents}(X_i)) = {\textbf{P}}(X_i{Parents}(X_i))$ + $\theta(X_i$|${Parents}(X_i)) = {\textbf{P}}(X_i$|${Parents}(X_i))$ for any Bayesian network.
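The derivation above — $P(z|y)$ as a ratio of two sums over the joint, reducing to $\theta(z|y)$ — is easy to spot-check numerically on the $X\rightarrow Y\rightarrow Z$ network. Parameter values below are arbitrary:

```python
# Sanity check that the ratio-of-sums form of P(z|y) equals theta(z|y).
theta_x = 0.3                         # theta(X=true)
theta_y = {True: 0.7, False: 0.4}     # theta(Y=true | X)
theta_z = {True: 0.9, False: 0.2}     # theta(Z=true | Y)

def joint(x, y, z):
    """Joint P(x, y, z) as the product of network parameters."""
    px = theta_x if x else 1 - theta_x
    py = theta_y[x] if y else 1 - theta_y[x]
    pz = theta_z[y] if z else 1 - theta_z[y]
    return px * py * pz

# P(z=true | y=true): numerator sums out X; denominator sums out X and Z.
num = sum(joint(x, True, True) for x in (True, False))
den = sum(joint(x, True, z) for x in (True, False) for z in (True, False))
assert abs(num / den - theta_z[True]) < 1e-12
```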

diff --git a/_site/bayes-nets-exercises/ex_4/index.html b/_site/bayes-nets-exercises/ex_4/index.html index bb8e929a2a..e85ac3e9e1 100644 --- a/_site/bayes-nets-exercises/ex_4/index.html +++ b/_site/bayes-nets-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_5/index.html b/_site/bayes-nets-exercises/ex_5/index.html index dcf5ef94bb..10eb362652 100644 --- a/_site/bayes-nets-exercises/ex_5/index.html +++ b/_site/bayes-nets-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_6/index.html b/_site/bayes-nets-exercises/ex_6/index.html index e941cfc468..0ca1eb3ce8 100644 --- a/_site/bayes-nets-exercises/ex_6/index.html +++ b/_site/bayes-nets-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_7/index.html b/_site/bayes-nets-exercises/ex_7/index.html index f42590f5ab..f462d4bb27 100644 --- a/_site/bayes-nets-exercises/ex_7/index.html +++ b/_site/bayes-nets-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_8/index.html b/_site/bayes-nets-exercises/ex_8/index.html index 3e8a9ba9ec..63944a60a3 100644 --- a/_site/bayes-nets-exercises/ex_8/index.html +++ b/_site/bayes-nets-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/ex_9/index.html b/_site/bayes-nets-exercises/ex_9/index.html index 370e03db6e..face9a6977 100644 --- a/_site/bayes-nets-exercises/ex_9/index.html +++ b/_site/bayes-nets-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayes-nets-exercises/index.html b/_site/bayes-nets-exercises/index.html index bad7624be1..c26d7c033a 100644 --- a/_site/bayes-nets-exercises/index.html +++ b/_site/bayes-nets-exercises/index.html @@ -82,7 +82,7 @@ @@ -214,30 +214,30 @@

14. Probabilistic Reasoning

Equation (parameter-joint-repn-equation on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters -$\theta(X_i{Parents}(X_i))$. This exercise asks you to derive +$\theta(X_i$|${Parents}(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities -${\textbf{ P}}(X_i{Parents}(X_i))$ from this definition.
+${\textbf{ P}}(X_i$|${Parents}(X_i))$ from this definition.
1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation and (marginalization-equation (pages conditional-probability-equation and marginalization-equation) - to express the conditional probability $P(zy)$ as the ratio of two sums, each over entries in the + to express the conditional probability $P(z$|$y)$ as the ratio of two sums, each over entries in the joint distribution ${\textbf{P}}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation to write this expression in terms of the network parameters - $\theta(X)$, $\theta(YX)$, and $\theta(ZY)$.
+ $\theta(X)$, $\theta(Y$|$X)$, and $\theta(Z$|$Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint - $\sum_{x_i} \theta(x_i{parents}(X_i))=1$, show - that the resulting expression reduces to $\theta(zy)$.
+ $\sum_{x_i} \theta(x_i$|${parents}(X_i))=1$, show + that the resulting expression reduces to $\theta(z$|$y)$.
4. Generalize this derivation to show that - $\theta(X_i{Parents}(X_i)) = {\textbf{P}}(X_i{Parents}(X_i))$ + $\theta(X_i$|${Parents}(X_i)) = {\textbf{P}}(X_i$|${Parents}(X_i))$ for any Bayesian network.

@@ -464,7 +464,7 @@

14. Probabilistic Reasoning

1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let - ${\textbf{P}}(X_2X_1)$ be a linear + ${\textbf{P}}(X_2$|$X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
@@ -563,7 +563,7 @@

14. Probabilistic Reasoning

2. Which is the best network? Explain.
3. Write out a conditional distribution for - ${\textbf{P}}(M_1N)$, for the case where + ${\textbf{P}}(M_1$|$N)$, for the case where $N \in \{1,2,3\}$ and $M_1 \in \{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
@@ -596,7 +596,7 @@

14. Probabilistic Reasoning

in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution -${\textbf{P}}(NM_1=2,M_2=2)$.
+${\textbf{P}}(N$|$M_1=2,M_2=2)$.
@@ -707,7 +707,7 @@

14. Probabilistic Reasoning

1. Section exact-inference-section applies variable elimination to the query - $${\textbf{P}}({Burglary}{JohnCalls}={true},{MaryCalls}={true})\ .$$ + $${\textbf{P}}({Burglary}$|${JohnCalls}={true},{MaryCalls}={true})\ .$$ Perform the calculations indicated and check that the answer is correct.
@@ -718,7 +718,7 @@

14. Probabilistic Reasoning

of Boolean variables $X_1,\ldots, X_n$ where ${Parents}(X_i)=\{X_{i-1}\}$ for $i=2,\ldots,n$. What is the complexity of computing - ${\textbf{P}}(X_1X_n={true})$ using + ${\textbf{P}}(X_1$|$X_n={true})$ using enumeration? Using variable elimination?
4. Prove that the complexity of running variable elimination on a @@ -801,7 +801,7 @@

14. Probabilistic Reasoning

Consider the query -${\textbf{P}}({Rain}{Sprinkler}={true},{WetGrass}={true})$ +${\textbf{P}}({Rain}$|${Sprinkler}={true},{WetGrass}={true})$ in Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.
@@ -866,11 +866,11 @@

14. Probabilistic Reasoning

The Metropolis--Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from -$\pi(\textbf{x})=P(\textbf{x}\textbf{e})$.) Like simulated annealing, +$\pi(\textbf{x})=P(\textbf{x}$|$\textbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new -state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}\textbf{x})$, given the current state $\textbf{x}$. +state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}$|$\textbf{x})$, given the current state $\textbf{x}$. Then, it probabilistically accepts or rejects $\textbf{x'}$ according to the acceptance probability -$$\alpha(\textbf{x'}\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}\textbf{x})} \right)\ .$$ +$$\alpha(\textbf{x'}$|$\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}$|$\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}$|$\textbf{x})} \right)\ .$$ If the proposal is rejected, the state remains at $\textbf{x}$.
1. Consider an ordinary Gibbs sampling step for a specific variable diff --git a/_site/bayesian-learning-exercises/ex_1/index.html b/_site/bayesian-learning-exercises/ex_1/index.html index 8b825b5485..68a297d31b 100644 --- a/_site/bayesian-learning-exercises/ex_1/index.html +++ b/_site/bayesian-learning-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_10/index.html b/_site/bayesian-learning-exercises/ex_10/index.html index 45da66b469..9ed8d50c7b 100644 --- a/_site/bayesian-learning-exercises/ex_10/index.html +++ b/_site/bayesian-learning-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_11/index.html b/_site/bayesian-learning-exercises/ex_11/index.html index 77a8938234..1009d95c1c 100644 --- a/_site/bayesian-learning-exercises/ex_11/index.html +++ b/_site/bayesian-learning-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_2/index.html b/_site/bayesian-learning-exercises/ex_2/index.html index c3e1bd1ece..f75d9d778d 100644 --- a/_site/bayesian-learning-exercises/ex_2/index.html +++ b/_site/bayesian-learning-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_3/index.html b/_site/bayesian-learning-exercises/ex_3/index.html index 37e4f47513..f4ad0e2ea8 100644 --- a/_site/bayesian-learning-exercises/ex_3/index.html +++ b/_site/bayesian-learning-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_4/index.html b/_site/bayesian-learning-exercises/ex_4/index.html index bb8d6743c8..234781b68d 100644 --- a/_site/bayesian-learning-exercises/ex_4/index.html +++ b/_site/bayesian-learning-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_5/index.html b/_site/bayesian-learning-exercises/ex_5/index.html index c89b5d9af4..fd410607ca 100644 --- a/_site/bayesian-learning-exercises/ex_5/index.html +++ 
b/_site/bayesian-learning-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_6/index.html b/_site/bayesian-learning-exercises/ex_6/index.html index d5112839d8..92e8b2d444 100644 --- a/_site/bayesian-learning-exercises/ex_6/index.html +++ b/_site/bayesian-learning-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_7/index.html b/_site/bayesian-learning-exercises/ex_7/index.html index 55f24638ef..47c424521b 100644 --- a/_site/bayesian-learning-exercises/ex_7/index.html +++ b/_site/bayesian-learning-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_8/index.html b/_site/bayesian-learning-exercises/ex_8/index.html index 0665b32c22..985d0d1eb6 100644 --- a/_site/bayesian-learning-exercises/ex_8/index.html +++ b/_site/bayesian-learning-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/ex_9/index.html b/_site/bayesian-learning-exercises/ex_9/index.html index c913eb9a5c..6d46c815b4 100644 --- a/_site/bayesian-learning-exercises/ex_9/index.html +++ b/_site/bayesian-learning-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bayesian-learning-exercises/index.html b/_site/bayesian-learning-exercises/index.html index ea16c6ddb6..08ed00c06a 100644 --- a/_site/bayesian-learning-exercises/index.html +++ b/_site/bayesian-learning-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/bookmarks/index.html b/_site/bookmarks/index.html index 88447aab37..83c88337f4 100644 --- a/_site/bookmarks/index.html +++ b/_site/bookmarks/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_1/index.html b/_site/complex-decisions-exercises/ex_1/index.html index 8e213cdbe1..df4f88fa73 100644 --- a/_site/complex-decisions-exercises/ex_1/index.html +++ b/_site/complex-decisions-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_10/index.html 
b/_site/complex-decisions-exercises/ex_10/index.html index ef0386f54c..a3d98e8158 100644 --- a/_site/complex-decisions-exercises/ex_10/index.html +++ b/_site/complex-decisions-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_11/index.html b/_site/complex-decisions-exercises/ex_11/index.html index f6360aa05e..553b1c6caf 100644 --- a/_site/complex-decisions-exercises/ex_11/index.html +++ b/_site/complex-decisions-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_12/index.html b/_site/complex-decisions-exercises/ex_12/index.html index a56a81efbc..2720e934d3 100644 --- a/_site/complex-decisions-exercises/ex_12/index.html +++ b/_site/complex-decisions-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_13/index.html b/_site/complex-decisions-exercises/ex_13/index.html index 1d1027d15f..8dfb16d555 100644 --- a/_site/complex-decisions-exercises/ex_13/index.html +++ b/_site/complex-decisions-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_14/index.html b/_site/complex-decisions-exercises/ex_14/index.html index 6eb61a7fdd..61c45f0fdc 100644 --- a/_site/complex-decisions-exercises/ex_14/index.html +++ b/_site/complex-decisions-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_15/index.html b/_site/complex-decisions-exercises/ex_15/index.html index 39f55ef438..1f0793b942 100644 --- a/_site/complex-decisions-exercises/ex_15/index.html +++ b/_site/complex-decisions-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_16/index.html b/_site/complex-decisions-exercises/ex_16/index.html index 7f67cd2ade..cd0e577dd5 100644 --- a/_site/complex-decisions-exercises/ex_16/index.html +++ b/_site/complex-decisions-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_17/index.html 
b/_site/complex-decisions-exercises/ex_17/index.html index 6ef7c652dc..0aab784ddb 100644 --- a/_site/complex-decisions-exercises/ex_17/index.html +++ b/_site/complex-decisions-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_18/index.html b/_site/complex-decisions-exercises/ex_18/index.html index 9b7e9fe38d..ca74cf373e 100644 --- a/_site/complex-decisions-exercises/ex_18/index.html +++ b/_site/complex-decisions-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_19/index.html b/_site/complex-decisions-exercises/ex_19/index.html index d153134c41..e717055015 100644 --- a/_site/complex-decisions-exercises/ex_19/index.html +++ b/_site/complex-decisions-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_2/index.html b/_site/complex-decisions-exercises/ex_2/index.html index faff806d21..df2f8151c1 100644 --- a/_site/complex-decisions-exercises/ex_2/index.html +++ b/_site/complex-decisions-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_20/index.html b/_site/complex-decisions-exercises/ex_20/index.html index 280e3f8f30..805d6cd1e6 100644 --- a/_site/complex-decisions-exercises/ex_20/index.html +++ b/_site/complex-decisions-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_21/index.html b/_site/complex-decisions-exercises/ex_21/index.html index 07a7c00646..346c54bfba 100644 --- a/_site/complex-decisions-exercises/ex_21/index.html +++ b/_site/complex-decisions-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_22/index.html b/_site/complex-decisions-exercises/ex_22/index.html index b5697fa134..b7190332ed 100644 --- a/_site/complex-decisions-exercises/ex_22/index.html +++ b/_site/complex-decisions-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_23/index.html 
b/_site/complex-decisions-exercises/ex_23/index.html index 62b172e3ec..2f5e5d70f0 100644 --- a/_site/complex-decisions-exercises/ex_23/index.html +++ b/_site/complex-decisions-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_24/index.html b/_site/complex-decisions-exercises/ex_24/index.html index ba82e788b5..9c8d8a4cbb 100644 --- a/_site/complex-decisions-exercises/ex_24/index.html +++ b/_site/complex-decisions-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_25/index.html b/_site/complex-decisions-exercises/ex_25/index.html index ceb986490a..5bd49d4d89 100644 --- a/_site/complex-decisions-exercises/ex_25/index.html +++ b/_site/complex-decisions-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_3/index.html b/_site/complex-decisions-exercises/ex_3/index.html index 117b775a45..dd3cc65cd6 100644 --- a/_site/complex-decisions-exercises/ex_3/index.html +++ b/_site/complex-decisions-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_4/index.html b/_site/complex-decisions-exercises/ex_4/index.html index b406938e40..eb94fa49f2 100644 --- a/_site/complex-decisions-exercises/ex_4/index.html +++ b/_site/complex-decisions-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_5/index.html b/_site/complex-decisions-exercises/ex_5/index.html index f5faba5eab..375b3776cf 100644 --- a/_site/complex-decisions-exercises/ex_5/index.html +++ b/_site/complex-decisions-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_6/index.html b/_site/complex-decisions-exercises/ex_6/index.html index 5d64d7a9dc..ccd39cc447 100644 --- a/_site/complex-decisions-exercises/ex_6/index.html +++ b/_site/complex-decisions-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_7/index.html 
b/_site/complex-decisions-exercises/ex_7/index.html index bb639a456d..647ee2c06a 100644 --- a/_site/complex-decisions-exercises/ex_7/index.html +++ b/_site/complex-decisions-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_8/index.html b/_site/complex-decisions-exercises/ex_8/index.html index a923a0a012..e0be4ab919 100644 --- a/_site/complex-decisions-exercises/ex_8/index.html +++ b/_site/complex-decisions-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/ex_9/index.html b/_site/complex-decisions-exercises/ex_9/index.html index 54102fa2cb..f8dd85eecf 100644 --- a/_site/complex-decisions-exercises/ex_9/index.html +++ b/_site/complex-decisions-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/complex-decisions-exercises/index.html b/_site/complex-decisions-exercises/index.html index 4afc3e5fad..7032cf5fe7 100644 --- a/_site/complex-decisions-exercises/index.html +++ b/_site/complex-decisions-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_1/index.html b/_site/concept-learning-exercises/ex_1/index.html index 82b6fe6851..07fefb2f57 100644 --- a/_site/concept-learning-exercises/ex_1/index.html +++ b/_site/concept-learning-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_10/index.html b/_site/concept-learning-exercises/ex_10/index.html index b773e04381..91f8723e9b 100644 --- a/_site/concept-learning-exercises/ex_10/index.html +++ b/_site/concept-learning-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_11/index.html b/_site/concept-learning-exercises/ex_11/index.html index 3d06a6cd9d..1ed7e8a3d2 100644 --- a/_site/concept-learning-exercises/ex_11/index.html +++ b/_site/concept-learning-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_12/index.html b/_site/concept-learning-exercises/ex_12/index.html 
index 4e444afdd9..5c5cbec4fd 100644 --- a/_site/concept-learning-exercises/ex_12/index.html +++ b/_site/concept-learning-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_13/index.html b/_site/concept-learning-exercises/ex_13/index.html index 4f9ddb1f09..869b8f601e 100644 --- a/_site/concept-learning-exercises/ex_13/index.html +++ b/_site/concept-learning-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_14/index.html b/_site/concept-learning-exercises/ex_14/index.html index 8fb83f0963..2a8f10479e 100644 --- a/_site/concept-learning-exercises/ex_14/index.html +++ b/_site/concept-learning-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_15/index.html b/_site/concept-learning-exercises/ex_15/index.html index ad13ab39e4..84848cc4ab 100644 --- a/_site/concept-learning-exercises/ex_15/index.html +++ b/_site/concept-learning-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_16/index.html b/_site/concept-learning-exercises/ex_16/index.html index 79ea893cc4..c505650722 100644 --- a/_site/concept-learning-exercises/ex_16/index.html +++ b/_site/concept-learning-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_17/index.html b/_site/concept-learning-exercises/ex_17/index.html index 6f6a25ea28..b0bd683f41 100644 --- a/_site/concept-learning-exercises/ex_17/index.html +++ b/_site/concept-learning-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_18/index.html b/_site/concept-learning-exercises/ex_18/index.html index cb4b1628c4..21d8b019bd 100644 --- a/_site/concept-learning-exercises/ex_18/index.html +++ b/_site/concept-learning-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_19/index.html b/_site/concept-learning-exercises/ex_19/index.html index 6d5a232e0c..0f9ee5b67a 
100644 --- a/_site/concept-learning-exercises/ex_19/index.html +++ b/_site/concept-learning-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_2/index.html b/_site/concept-learning-exercises/ex_2/index.html index 6bccd008ba..51abf7e51a 100644 --- a/_site/concept-learning-exercises/ex_2/index.html +++ b/_site/concept-learning-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_20/index.html b/_site/concept-learning-exercises/ex_20/index.html index efceb95fa9..3da5920bb8 100644 --- a/_site/concept-learning-exercises/ex_20/index.html +++ b/_site/concept-learning-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_21/index.html b/_site/concept-learning-exercises/ex_21/index.html index 896dc1de90..7048211d96 100644 --- a/_site/concept-learning-exercises/ex_21/index.html +++ b/_site/concept-learning-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_22/index.html b/_site/concept-learning-exercises/ex_22/index.html index 2fa1df44ab..0c7bba78c7 100644 --- a/_site/concept-learning-exercises/ex_22/index.html +++ b/_site/concept-learning-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_23/index.html b/_site/concept-learning-exercises/ex_23/index.html index d5e1110107..9939546f80 100644 --- a/_site/concept-learning-exercises/ex_23/index.html +++ b/_site/concept-learning-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_24/index.html b/_site/concept-learning-exercises/ex_24/index.html index 78f2af2106..3327b5fade 100644 --- a/_site/concept-learning-exercises/ex_24/index.html +++ b/_site/concept-learning-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_25/index.html b/_site/concept-learning-exercises/ex_25/index.html index 4aca7ef0ec..48b4a98e5a 100644 --- 
a/_site/concept-learning-exercises/ex_25/index.html +++ b/_site/concept-learning-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_26/index.html b/_site/concept-learning-exercises/ex_26/index.html index f62a6f4f4a..ece0cad2e9 100644 --- a/_site/concept-learning-exercises/ex_26/index.html +++ b/_site/concept-learning-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_27/index.html b/_site/concept-learning-exercises/ex_27/index.html index 0b9ddb5e06..5921dc3a7a 100644 --- a/_site/concept-learning-exercises/ex_27/index.html +++ b/_site/concept-learning-exercises/ex_27/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_28/index.html b/_site/concept-learning-exercises/ex_28/index.html index b87d2c6caa..eb52957e22 100644 --- a/_site/concept-learning-exercises/ex_28/index.html +++ b/_site/concept-learning-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_29/index.html b/_site/concept-learning-exercises/ex_29/index.html index 52ee4be7d8..d4c9db3df3 100644 --- a/_site/concept-learning-exercises/ex_29/index.html +++ b/_site/concept-learning-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_3/index.html b/_site/concept-learning-exercises/ex_3/index.html index 39e9f59c72..0ca80e3122 100644 --- a/_site/concept-learning-exercises/ex_3/index.html +++ b/_site/concept-learning-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_30/index.html b/_site/concept-learning-exercises/ex_30/index.html index 5705f37009..61d83ff506 100644 --- a/_site/concept-learning-exercises/ex_30/index.html +++ b/_site/concept-learning-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_31/index.html b/_site/concept-learning-exercises/ex_31/index.html index 7831b3af8c..5a055e2bd7 100644 --- 
a/_site/concept-learning-exercises/ex_31/index.html +++ b/_site/concept-learning-exercises/ex_31/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_32/index.html b/_site/concept-learning-exercises/ex_32/index.html index b6ed223bd0..880093b87f 100644 --- a/_site/concept-learning-exercises/ex_32/index.html +++ b/_site/concept-learning-exercises/ex_32/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_33/index.html b/_site/concept-learning-exercises/ex_33/index.html index eeb62afced..fbbfa5e040 100644 --- a/_site/concept-learning-exercises/ex_33/index.html +++ b/_site/concept-learning-exercises/ex_33/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_4/index.html b/_site/concept-learning-exercises/ex_4/index.html index 3b1ead8e4e..5e8c7aeb15 100644 --- a/_site/concept-learning-exercises/ex_4/index.html +++ b/_site/concept-learning-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_5/index.html b/_site/concept-learning-exercises/ex_5/index.html index fca2f222f3..da122a41a3 100644 --- a/_site/concept-learning-exercises/ex_5/index.html +++ b/_site/concept-learning-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_6/index.html b/_site/concept-learning-exercises/ex_6/index.html index d827d26951..57faec980a 100644 --- a/_site/concept-learning-exercises/ex_6/index.html +++ b/_site/concept-learning-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_7/index.html b/_site/concept-learning-exercises/ex_7/index.html index bb8ad7905e..4650de1a3b 100644 --- a/_site/concept-learning-exercises/ex_7/index.html +++ b/_site/concept-learning-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_8/index.html b/_site/concept-learning-exercises/ex_8/index.html index 21f3adc1d7..a494de3f09 100644 --- a/_site/concept-learning-exercises/ex_8/index.html 
+++ b/_site/concept-learning-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/ex_9/index.html b/_site/concept-learning-exercises/ex_9/index.html index bd16f23056..dfb58f4711 100644 --- a/_site/concept-learning-exercises/ex_9/index.html +++ b/_site/concept-learning-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/concept-learning-exercises/index.html b/_site/concept-learning-exercises/index.html index 90206d1642..c63e4ee905 100644 --- a/_site/concept-learning-exercises/index.html +++ b/_site/concept-learning-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_1/index.html b/_site/csp-exercises/ex_1/index.html index 21fb56413a..7a40ae77ec 100644 --- a/_site/csp-exercises/ex_1/index.html +++ b/_site/csp-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_10/index.html b/_site/csp-exercises/ex_10/index.html index df5f34fda0..bd85de9248 100644 --- a/_site/csp-exercises/ex_10/index.html +++ b/_site/csp-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_11/index.html b/_site/csp-exercises/ex_11/index.html index 59e928dfe9..11d5aa05d7 100644 --- a/_site/csp-exercises/ex_11/index.html +++ b/_site/csp-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_12/index.html b/_site/csp-exercises/ex_12/index.html index 1967128530..e14aa1fd8f 100644 --- a/_site/csp-exercises/ex_12/index.html +++ b/_site/csp-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_13/index.html b/_site/csp-exercises/ex_13/index.html index d651f8a6d7..dc4f916570 100644 --- a/_site/csp-exercises/ex_13/index.html +++ b/_site/csp-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_14/index.html b/_site/csp-exercises/ex_14/index.html index 41b0fea376..7cc2b44a74 100644 --- a/_site/csp-exercises/ex_14/index.html +++ b/_site/csp-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/csp-exercises/ex_15/index.html b/_site/csp-exercises/ex_15/index.html index 3169d44100..f6b2ee7719 100644 --- a/_site/csp-exercises/ex_15/index.html +++ b/_site/csp-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_16/index.html b/_site/csp-exercises/ex_16/index.html index b4abde657e..635bd6713a 100644 --- a/_site/csp-exercises/ex_16/index.html +++ b/_site/csp-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_17/index.html b/_site/csp-exercises/ex_17/index.html index 7f53a703ee..9eed7d5d56 100644 --- a/_site/csp-exercises/ex_17/index.html +++ b/_site/csp-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_18/index.html b/_site/csp-exercises/ex_18/index.html index 87204391ac..c257005353 100644 --- a/_site/csp-exercises/ex_18/index.html +++ b/_site/csp-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_19/index.html b/_site/csp-exercises/ex_19/index.html index 8560dd5a5a..e966b8ce0f 100644 --- a/_site/csp-exercises/ex_19/index.html +++ b/_site/csp-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_2/index.html b/_site/csp-exercises/ex_2/index.html index de295f9f77..77b2bee435 100644 --- a/_site/csp-exercises/ex_2/index.html +++ b/_site/csp-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_20/index.html b/_site/csp-exercises/ex_20/index.html index cdcf856c49..6f887d11af 100644 --- a/_site/csp-exercises/ex_20/index.html +++ b/_site/csp-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_3/index.html b/_site/csp-exercises/ex_3/index.html index a8845d6f24..d97eb6732f 100644 --- a/_site/csp-exercises/ex_3/index.html +++ b/_site/csp-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_4/index.html b/_site/csp-exercises/ex_4/index.html index edf140a6e3..becfdb4ea1 100644 --- a/_site/csp-exercises/ex_4/index.html +++ 
b/_site/csp-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_5/index.html b/_site/csp-exercises/ex_5/index.html index bc5ce12c99..bf13ce9921 100644 --- a/_site/csp-exercises/ex_5/index.html +++ b/_site/csp-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_6/index.html b/_site/csp-exercises/ex_6/index.html index a0ab5e1c24..d457dde8ca 100644 --- a/_site/csp-exercises/ex_6/index.html +++ b/_site/csp-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_7/index.html b/_site/csp-exercises/ex_7/index.html index 978ac88602..2051d7dc65 100644 --- a/_site/csp-exercises/ex_7/index.html +++ b/_site/csp-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_8/index.html b/_site/csp-exercises/ex_8/index.html index 802e1a32fc..f27aa47a8a 100644 --- a/_site/csp-exercises/ex_8/index.html +++ b/_site/csp-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/ex_9/index.html b/_site/csp-exercises/ex_9/index.html index a2ba23997d..54f52e3fc0 100644 --- a/_site/csp-exercises/ex_9/index.html +++ b/_site/csp-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/csp-exercises/index.html b/_site/csp-exercises/index.html index 3d362342a0..1828a81ded 100644 --- a/_site/csp-exercises/index.html +++ b/_site/csp-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_1/index.html b/_site/dbn-exercises/ex_1/index.html index 35ae87818a..f1dd085368 100644 --- a/_site/dbn-exercises/ex_1/index.html +++ b/_site/dbn-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_10/index.html b/_site/dbn-exercises/ex_10/index.html index 22406517dd..363ae53342 100644 --- a/_site/dbn-exercises/ex_10/index.html +++ b/_site/dbn-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_11/index.html b/_site/dbn-exercises/ex_11/index.html index cbbef3d901..2ea7b020c1 100644 --- 
a/_site/dbn-exercises/ex_11/index.html +++ b/_site/dbn-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_12/index.html b/_site/dbn-exercises/ex_12/index.html index 928ac25f27..5a362d1279 100644 --- a/_site/dbn-exercises/ex_12/index.html +++ b/_site/dbn-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_13/index.html b/_site/dbn-exercises/ex_13/index.html index cbf6e9c02f..c91ccf68fc 100644 --- a/_site/dbn-exercises/ex_13/index.html +++ b/_site/dbn-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_14/index.html b/_site/dbn-exercises/ex_14/index.html index 7a6faf32c5..e0608b37f2 100644 --- a/_site/dbn-exercises/ex_14/index.html +++ b/_site/dbn-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_15/index.html b/_site/dbn-exercises/ex_15/index.html index f414f8831b..c2ef1acd38 100644 --- a/_site/dbn-exercises/ex_15/index.html +++ b/_site/dbn-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_16/index.html b/_site/dbn-exercises/ex_16/index.html index 53a12c539b..69e269914a 100644 --- a/_site/dbn-exercises/ex_16/index.html +++ b/_site/dbn-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_17/index.html b/_site/dbn-exercises/ex_17/index.html index 9c11984733..f55bdbb4c8 100644 --- a/_site/dbn-exercises/ex_17/index.html +++ b/_site/dbn-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_18/index.html b/_site/dbn-exercises/ex_18/index.html index 7fc5e8118b..4516873242 100644 --- a/_site/dbn-exercises/ex_18/index.html +++ b/_site/dbn-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_19/index.html b/_site/dbn-exercises/ex_19/index.html index b585701b99..1b8b218fa4 100644 --- a/_site/dbn-exercises/ex_19/index.html +++ b/_site/dbn-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_2/index.html 
b/_site/dbn-exercises/ex_2/index.html index 2de83ff5ce..03bf729269 100644 --- a/_site/dbn-exercises/ex_2/index.html +++ b/_site/dbn-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_20/index.html b/_site/dbn-exercises/ex_20/index.html index fe13c77adf..59210b3a5b 100644 --- a/_site/dbn-exercises/ex_20/index.html +++ b/_site/dbn-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_3/index.html b/_site/dbn-exercises/ex_3/index.html index c4558b7ce2..1915f6d9f1 100644 --- a/_site/dbn-exercises/ex_3/index.html +++ b/_site/dbn-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_4/index.html b/_site/dbn-exercises/ex_4/index.html index 1bb711474e..3689c8c5be 100644 --- a/_site/dbn-exercises/ex_4/index.html +++ b/_site/dbn-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_5/index.html b/_site/dbn-exercises/ex_5/index.html index bef060872e..0ad03e50e6 100644 --- a/_site/dbn-exercises/ex_5/index.html +++ b/_site/dbn-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_6/index.html b/_site/dbn-exercises/ex_6/index.html index 7f25028777..afe6af2295 100644 --- a/_site/dbn-exercises/ex_6/index.html +++ b/_site/dbn-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_7/index.html b/_site/dbn-exercises/ex_7/index.html index 35b5e5fdac..a26906ef65 100644 --- a/_site/dbn-exercises/ex_7/index.html +++ b/_site/dbn-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_8/index.html b/_site/dbn-exercises/ex_8/index.html index efe800d95b..0035f38c9b 100644 --- a/_site/dbn-exercises/ex_8/index.html +++ b/_site/dbn-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/dbn-exercises/ex_9/index.html b/_site/dbn-exercises/ex_9/index.html index b066a057aa..3de16491a9 100644 --- a/_site/dbn-exercises/ex_9/index.html +++ b/_site/dbn-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/dbn-exercises/index.html b/_site/dbn-exercises/index.html index 83c45b5309..0b7fe82468 100644 --- a/_site/dbn-exercises/index.html +++ b/_site/dbn-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_1/index.html b/_site/decision-theory-exercises/ex_1/index.html index 7a4b5ef426..ea57d93d3e 100644 --- a/_site/decision-theory-exercises/ex_1/index.html +++ b/_site/decision-theory-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_10/index.html b/_site/decision-theory-exercises/ex_10/index.html index 45144048d0..5748030ead 100644 --- a/_site/decision-theory-exercises/ex_10/index.html +++ b/_site/decision-theory-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_11/index.html b/_site/decision-theory-exercises/ex_11/index.html index 74f28263e2..9ae4940a72 100644 --- a/_site/decision-theory-exercises/ex_11/index.html +++ b/_site/decision-theory-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_12/index.html b/_site/decision-theory-exercises/ex_12/index.html index 2b38e71c20..f6660debdd 100644 --- a/_site/decision-theory-exercises/ex_12/index.html +++ b/_site/decision-theory-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_13/index.html b/_site/decision-theory-exercises/ex_13/index.html index 94c5c28c81..8cdb710a1f 100644 --- a/_site/decision-theory-exercises/ex_13/index.html +++ b/_site/decision-theory-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_14/index.html b/_site/decision-theory-exercises/ex_14/index.html index 0068913b3a..ea45030cd3 100644 --- a/_site/decision-theory-exercises/ex_14/index.html +++ b/_site/decision-theory-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_15/index.html b/_site/decision-theory-exercises/ex_15/index.html index 4815e348d0..acc6085cf0 
100644 --- a/_site/decision-theory-exercises/ex_15/index.html +++ b/_site/decision-theory-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_16/index.html b/_site/decision-theory-exercises/ex_16/index.html index de84a9c1b2..846e3fc99b 100644 --- a/_site/decision-theory-exercises/ex_16/index.html +++ b/_site/decision-theory-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_17/index.html b/_site/decision-theory-exercises/ex_17/index.html index 6c596563af..d10bc7e20a 100644 --- a/_site/decision-theory-exercises/ex_17/index.html +++ b/_site/decision-theory-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_18/index.html b/_site/decision-theory-exercises/ex_18/index.html index 165024b88a..842fb4476e 100644 --- a/_site/decision-theory-exercises/ex_18/index.html +++ b/_site/decision-theory-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_19/index.html b/_site/decision-theory-exercises/ex_19/index.html index 5a9f8c012e..d51cf6105d 100644 --- a/_site/decision-theory-exercises/ex_19/index.html +++ b/_site/decision-theory-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_2/index.html b/_site/decision-theory-exercises/ex_2/index.html index 763d3cd458..33c3f6a128 100644 --- a/_site/decision-theory-exercises/ex_2/index.html +++ b/_site/decision-theory-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_20/index.html b/_site/decision-theory-exercises/ex_20/index.html index c760b6ca8f..f769276c15 100644 --- a/_site/decision-theory-exercises/ex_20/index.html +++ b/_site/decision-theory-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_21/index.html b/_site/decision-theory-exercises/ex_21/index.html index 5f478c1de3..3e77e6423b 100644 --- a/_site/decision-theory-exercises/ex_21/index.html 
+++ b/_site/decision-theory-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_22/index.html b/_site/decision-theory-exercises/ex_22/index.html index 16ad4c454b..aee406e59b 100644 --- a/_site/decision-theory-exercises/ex_22/index.html +++ b/_site/decision-theory-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_23/index.html b/_site/decision-theory-exercises/ex_23/index.html index 5ce36f701d..d050be99bd 100644 --- a/_site/decision-theory-exercises/ex_23/index.html +++ b/_site/decision-theory-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_3/index.html b/_site/decision-theory-exercises/ex_3/index.html index 46d4179c48..89305dc6ce 100644 --- a/_site/decision-theory-exercises/ex_3/index.html +++ b/_site/decision-theory-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_4/index.html b/_site/decision-theory-exercises/ex_4/index.html index 8a3e57034e..0c8f3faffa 100644 --- a/_site/decision-theory-exercises/ex_4/index.html +++ b/_site/decision-theory-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_5/index.html b/_site/decision-theory-exercises/ex_5/index.html index 324fea5ea9..de8b9dae9c 100644 --- a/_site/decision-theory-exercises/ex_5/index.html +++ b/_site/decision-theory-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_6/index.html b/_site/decision-theory-exercises/ex_6/index.html index 09e768739f..535a2b7c49 100644 --- a/_site/decision-theory-exercises/ex_6/index.html +++ b/_site/decision-theory-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_7/index.html b/_site/decision-theory-exercises/ex_7/index.html index f864e4e08f..a25dd5bc3f 100644 --- a/_site/decision-theory-exercises/ex_7/index.html +++ b/_site/decision-theory-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff 
--git a/_site/decision-theory-exercises/ex_8/index.html b/_site/decision-theory-exercises/ex_8/index.html index 89eb0c0bab..a9f9ef63d0 100644 --- a/_site/decision-theory-exercises/ex_8/index.html +++ b/_site/decision-theory-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/ex_9/index.html b/_site/decision-theory-exercises/ex_9/index.html index 364add189d..9f07dd38a7 100644 --- a/_site/decision-theory-exercises/ex_9/index.html +++ b/_site/decision-theory-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/decision-theory-exercises/index.html b/_site/decision-theory-exercises/index.html index 88a4c54e67..022606415f 100644 --- a/_site/decision-theory-exercises/index.html +++ b/_site/decision-theory-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_1/index.html b/_site/fol-exercises/ex_1/index.html index c9679f1313..fd421b3ecf 100644 --- a/_site/fol-exercises/ex_1/index.html +++ b/_site/fol-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_10/index.html b/_site/fol-exercises/ex_10/index.html index 0b2b73b6f1..927332435e 100644 --- a/_site/fol-exercises/ex_10/index.html +++ b/_site/fol-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_11/index.html b/_site/fol-exercises/ex_11/index.html index 3816a272ef..e4faff00e9 100644 --- a/_site/fol-exercises/ex_11/index.html +++ b/_site/fol-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_12/index.html b/_site/fol-exercises/ex_12/index.html index 6d542fc8dd..f124f25898 100644 --- a/_site/fol-exercises/ex_12/index.html +++ b/_site/fol-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_13/index.html b/_site/fol-exercises/ex_13/index.html index f80c9df5c1..5b191ef277 100644 --- a/_site/fol-exercises/ex_13/index.html +++ b/_site/fol-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_14/index.html 
b/_site/fol-exercises/ex_14/index.html index ccf81a493e..07c6970d5b 100644 --- a/_site/fol-exercises/ex_14/index.html +++ b/_site/fol-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_15/index.html b/_site/fol-exercises/ex_15/index.html index 6f0262d838..5ad222bf60 100644 --- a/_site/fol-exercises/ex_15/index.html +++ b/_site/fol-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_16/index.html b/_site/fol-exercises/ex_16/index.html index ef93b3f114..d448d92bb5 100644 --- a/_site/fol-exercises/ex_16/index.html +++ b/_site/fol-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_17/index.html b/_site/fol-exercises/ex_17/index.html index bf4659f722..f7fe987110 100644 --- a/_site/fol-exercises/ex_17/index.html +++ b/_site/fol-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_18/index.html b/_site/fol-exercises/ex_18/index.html index c1b2efa56f..f7fbc6deb4 100644 --- a/_site/fol-exercises/ex_18/index.html +++ b/_site/fol-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_19/index.html b/_site/fol-exercises/ex_19/index.html index c83601d0df..5ede73997d 100644 --- a/_site/fol-exercises/ex_19/index.html +++ b/_site/fol-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_2/index.html b/_site/fol-exercises/ex_2/index.html index f710690854..cf81a66821 100644 --- a/_site/fol-exercises/ex_2/index.html +++ b/_site/fol-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_20/index.html b/_site/fol-exercises/ex_20/index.html index 6f8bf1e90d..c50d117efa 100644 --- a/_site/fol-exercises/ex_20/index.html +++ b/_site/fol-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_21/index.html b/_site/fol-exercises/ex_21/index.html index d961e87bb5..0a077a58e7 100644 --- a/_site/fol-exercises/ex_21/index.html +++ b/_site/fol-exercises/ex_21/index.html @@ -82,7 
+82,7 @@ diff --git a/_site/fol-exercises/ex_22/index.html b/_site/fol-exercises/ex_22/index.html index e97a518d47..138ba3dec8 100644 --- a/_site/fol-exercises/ex_22/index.html +++ b/_site/fol-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_23/index.html b/_site/fol-exercises/ex_23/index.html index 8662b4c9b2..4a96a8aded 100644 --- a/_site/fol-exercises/ex_23/index.html +++ b/_site/fol-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_24/index.html b/_site/fol-exercises/ex_24/index.html index 67ba801e90..4c60c5bc7f 100644 --- a/_site/fol-exercises/ex_24/index.html +++ b/_site/fol-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_25/index.html b/_site/fol-exercises/ex_25/index.html index 52f5260d55..af00dc44c7 100644 --- a/_site/fol-exercises/ex_25/index.html +++ b/_site/fol-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_26/index.html b/_site/fol-exercises/ex_26/index.html index 2c2f01b646..cc8a092f0e 100644 --- a/_site/fol-exercises/ex_26/index.html +++ b/_site/fol-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_27/index.html b/_site/fol-exercises/ex_27/index.html index 00223fa7cd..a658cd13ae 100644 --- a/_site/fol-exercises/ex_27/index.html +++ b/_site/fol-exercises/ex_27/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_28/index.html b/_site/fol-exercises/ex_28/index.html index 8bc5cf6f05..c81bcdab30 100644 --- a/_site/fol-exercises/ex_28/index.html +++ b/_site/fol-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_29/index.html b/_site/fol-exercises/ex_29/index.html index 6843f179ab..89699b7e34 100644 --- a/_site/fol-exercises/ex_29/index.html +++ b/_site/fol-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_3/index.html b/_site/fol-exercises/ex_3/index.html index 8fd9debd28..a39b04e876 100644 --- 
a/_site/fol-exercises/ex_3/index.html +++ b/_site/fol-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_30/index.html b/_site/fol-exercises/ex_30/index.html index dca5dc7b50..7ddd3daf2d 100644 --- a/_site/fol-exercises/ex_30/index.html +++ b/_site/fol-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_31/index.html b/_site/fol-exercises/ex_31/index.html index ce8d0ef8d3..4499156562 100644 --- a/_site/fol-exercises/ex_31/index.html +++ b/_site/fol-exercises/ex_31/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_32/index.html b/_site/fol-exercises/ex_32/index.html index 80571a47c6..149f68da26 100644 --- a/_site/fol-exercises/ex_32/index.html +++ b/_site/fol-exercises/ex_32/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_33/index.html b/_site/fol-exercises/ex_33/index.html index 0362476f43..3edc209150 100644 --- a/_site/fol-exercises/ex_33/index.html +++ b/_site/fol-exercises/ex_33/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_34/index.html b/_site/fol-exercises/ex_34/index.html index 3f6472bc2e..f4d2d7faea 100644 --- a/_site/fol-exercises/ex_34/index.html +++ b/_site/fol-exercises/ex_34/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_35/index.html b/_site/fol-exercises/ex_35/index.html index 65de1d8ffc..435151f5ac 100644 --- a/_site/fol-exercises/ex_35/index.html +++ b/_site/fol-exercises/ex_35/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_36/index.html b/_site/fol-exercises/ex_36/index.html index 0515e91242..da70559b10 100644 --- a/_site/fol-exercises/ex_36/index.html +++ b/_site/fol-exercises/ex_36/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_4/index.html b/_site/fol-exercises/ex_4/index.html index 7e25d437b0..ff18163df3 100644 --- a/_site/fol-exercises/ex_4/index.html +++ b/_site/fol-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_5/index.html 
b/_site/fol-exercises/ex_5/index.html index 8c8f8353fa..1161311c96 100644 --- a/_site/fol-exercises/ex_5/index.html +++ b/_site/fol-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_6/index.html b/_site/fol-exercises/ex_6/index.html index a4eed35e02..442bfccec0 100644 --- a/_site/fol-exercises/ex_6/index.html +++ b/_site/fol-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_7/index.html b/_site/fol-exercises/ex_7/index.html index 01c4a7b488..7afc1fcbf8 100644 --- a/_site/fol-exercises/ex_7/index.html +++ b/_site/fol-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_8/index.html b/_site/fol-exercises/ex_8/index.html index 69a2170e20..25d60285ee 100644 --- a/_site/fol-exercises/ex_8/index.html +++ b/_site/fol-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/ex_9/index.html b/_site/fol-exercises/ex_9/index.html index 725d9a2bfa..d360825a5f 100644 --- a/_site/fol-exercises/ex_9/index.html +++ b/_site/fol-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/fol-exercises/index.html b/_site/fol-exercises/index.html index 5dd626cf0f..67666ffe70 100644 --- a/_site/fol-exercises/index.html +++ b/_site/fol-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_1/index.html b/_site/game-playing-exercises/ex_1/index.html index 312ae7ef54..1fc3b662c3 100644 --- a/_site/game-playing-exercises/ex_1/index.html +++ b/_site/game-playing-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_10/index.html b/_site/game-playing-exercises/ex_10/index.html index d8127e71dd..7c84f2cd60 100644 --- a/_site/game-playing-exercises/ex_10/index.html +++ b/_site/game-playing-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_11/index.html b/_site/game-playing-exercises/ex_11/index.html index 354bf80eae..c40c34414b 100644 --- 
a/_site/game-playing-exercises/ex_11/index.html +++ b/_site/game-playing-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_12/index.html b/_site/game-playing-exercises/ex_12/index.html index 1bf581f637..de3821ab9c 100644 --- a/_site/game-playing-exercises/ex_12/index.html +++ b/_site/game-playing-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_13/index.html b/_site/game-playing-exercises/ex_13/index.html index 3b06873b1a..5c4d2d3a26 100644 --- a/_site/game-playing-exercises/ex_13/index.html +++ b/_site/game-playing-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_14/index.html b/_site/game-playing-exercises/ex_14/index.html index 0a45a6e9f6..412ccd19bf 100644 --- a/_site/game-playing-exercises/ex_14/index.html +++ b/_site/game-playing-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_15/index.html b/_site/game-playing-exercises/ex_15/index.html index c4dc021db3..cc1bcf9a48 100644 --- a/_site/game-playing-exercises/ex_15/index.html +++ b/_site/game-playing-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_16/index.html b/_site/game-playing-exercises/ex_16/index.html index 6856abfea3..cd5c1d57a7 100644 --- a/_site/game-playing-exercises/ex_16/index.html +++ b/_site/game-playing-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_17/index.html b/_site/game-playing-exercises/ex_17/index.html index 058714e7ca..f58fba0327 100644 --- a/_site/game-playing-exercises/ex_17/index.html +++ b/_site/game-playing-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_18/index.html b/_site/game-playing-exercises/ex_18/index.html index 18b16d88c7..f39defdb17 100644 --- a/_site/game-playing-exercises/ex_18/index.html +++ b/_site/game-playing-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/game-playing-exercises/ex_19/index.html b/_site/game-playing-exercises/ex_19/index.html index fde934f3df..361ab699b0 100644 --- a/_site/game-playing-exercises/ex_19/index.html +++ b/_site/game-playing-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_2/index.html b/_site/game-playing-exercises/ex_2/index.html index 49fff98411..1012eec9a5 100644 --- a/_site/game-playing-exercises/ex_2/index.html +++ b/_site/game-playing-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_20/index.html b/_site/game-playing-exercises/ex_20/index.html index 9cf1491d86..b4c8cb910e 100644 --- a/_site/game-playing-exercises/ex_20/index.html +++ b/_site/game-playing-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_21/index.html b/_site/game-playing-exercises/ex_21/index.html index 62ded56ebf..28a5401910 100644 --- a/_site/game-playing-exercises/ex_21/index.html +++ b/_site/game-playing-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_22/index.html b/_site/game-playing-exercises/ex_22/index.html index ee7667c915..e72cd6a46b 100644 --- a/_site/game-playing-exercises/ex_22/index.html +++ b/_site/game-playing-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_23/index.html b/_site/game-playing-exercises/ex_23/index.html index bb6bb71421..635a4b29cf 100644 --- a/_site/game-playing-exercises/ex_23/index.html +++ b/_site/game-playing-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_24/index.html b/_site/game-playing-exercises/ex_24/index.html index 6b7006c4b4..80c9988a8c 100644 --- a/_site/game-playing-exercises/ex_24/index.html +++ b/_site/game-playing-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_25/index.html b/_site/game-playing-exercises/ex_25/index.html index 1276548425..dbd1232e98 100644 --- 
a/_site/game-playing-exercises/ex_25/index.html +++ b/_site/game-playing-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_3/index.html b/_site/game-playing-exercises/ex_3/index.html index 32360ec871..cf6bf6f051 100644 --- a/_site/game-playing-exercises/ex_3/index.html +++ b/_site/game-playing-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_4/index.html b/_site/game-playing-exercises/ex_4/index.html index fc5acb1f70..67854afbf5 100644 --- a/_site/game-playing-exercises/ex_4/index.html +++ b/_site/game-playing-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_5/index.html b/_site/game-playing-exercises/ex_5/index.html index f4ab375377..7e0b687f4a 100644 --- a/_site/game-playing-exercises/ex_5/index.html +++ b/_site/game-playing-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_6/index.html b/_site/game-playing-exercises/ex_6/index.html index 6b4a0fbe53..141ad85e57 100644 --- a/_site/game-playing-exercises/ex_6/index.html +++ b/_site/game-playing-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_7/index.html b/_site/game-playing-exercises/ex_7/index.html index 1b2b72fbb8..f92be13dfa 100644 --- a/_site/game-playing-exercises/ex_7/index.html +++ b/_site/game-playing-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_8/index.html b/_site/game-playing-exercises/ex_8/index.html index dc7737bb89..3191f86bba 100644 --- a/_site/game-playing-exercises/ex_8/index.html +++ b/_site/game-playing-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/ex_9/index.html b/_site/game-playing-exercises/ex_9/index.html index 87615e14d6..6a14f099f0 100644 --- a/_site/game-playing-exercises/ex_9/index.html +++ b/_site/game-playing-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/game-playing-exercises/index.html 
b/_site/game-playing-exercises/index.html index a781193d0f..633742e663 100644 --- a/_site/game-playing-exercises/index.html +++ b/_site/game-playing-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_1/index.html b/_site/ilp-exercises/ex_1/index.html index 189d58fa72..2aba00fb10 100644 --- a/_site/ilp-exercises/ex_1/index.html +++ b/_site/ilp-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_2/index.html b/_site/ilp-exercises/ex_2/index.html index 20a871840e..bae43f9d60 100644 --- a/_site/ilp-exercises/ex_2/index.html +++ b/_site/ilp-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_3/index.html b/_site/ilp-exercises/ex_3/index.html index 3ae095b1fd..46dcc0df10 100644 --- a/_site/ilp-exercises/ex_3/index.html +++ b/_site/ilp-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_4/index.html b/_site/ilp-exercises/ex_4/index.html index 2d07af0fad..7d1f83ae3e 100644 --- a/_site/ilp-exercises/ex_4/index.html +++ b/_site/ilp-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_5/index.html b/_site/ilp-exercises/ex_5/index.html index 49873e00e5..e78e1f1aac 100644 --- a/_site/ilp-exercises/ex_5/index.html +++ b/_site/ilp-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_6/index.html b/_site/ilp-exercises/ex_6/index.html index 6fc95aadfb..35b19843fd 100644 --- a/_site/ilp-exercises/ex_6/index.html +++ b/_site/ilp-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_7/index.html b/_site/ilp-exercises/ex_7/index.html index c78d5a7836..4270e89ddf 100644 --- a/_site/ilp-exercises/ex_7/index.html +++ b/_site/ilp-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/ilp-exercises/ex_8/index.html b/_site/ilp-exercises/ex_8/index.html index 5f56977138..3761fac442 100644 --- a/_site/ilp-exercises/ex_8/index.html +++ b/_site/ilp-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/ilp-exercises/index.html b/_site/ilp-exercises/index.html index 1c938018b2..944186b57d 100644 --- a/_site/ilp-exercises/index.html +++ b/_site/ilp-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/index.html b/_site/index.html index 6b06c71bf1..c8730e287c 100644 --- a/_site/index.html +++ b/_site/index.html @@ -55,7 +55,7 @@ diff --git a/_site/intro-exercises/ex_1/index.html b/_site/intro-exercises/ex_1/index.html index 034917ab02..200637eff1 100644 --- a/_site/intro-exercises/ex_1/index.html +++ b/_site/intro-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_10/index.html b/_site/intro-exercises/ex_10/index.html index 487458fc89..1521fcd53f 100644 --- a/_site/intro-exercises/ex_10/index.html +++ b/_site/intro-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_11/index.html b/_site/intro-exercises/ex_11/index.html index b50d104036..6fbfdf323d 100644 --- a/_site/intro-exercises/ex_11/index.html +++ b/_site/intro-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_12/index.html b/_site/intro-exercises/ex_12/index.html index 604474fcfe..64a9490bd1 100644 --- a/_site/intro-exercises/ex_12/index.html +++ b/_site/intro-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_13/index.html b/_site/intro-exercises/ex_13/index.html index 8a2cc319c6..86592835f7 100644 --- a/_site/intro-exercises/ex_13/index.html +++ b/_site/intro-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_14/index.html b/_site/intro-exercises/ex_14/index.html index 8001d1218e..abfd1869a8 100644 --- a/_site/intro-exercises/ex_14/index.html +++ b/_site/intro-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_15/index.html b/_site/intro-exercises/ex_15/index.html index 488f180359..f3cd0f8fd7 100644 --- a/_site/intro-exercises/ex_15/index.html +++ b/_site/intro-exercises/ex_15/index.html @@ -82,7 +82,7 @@ 
diff --git a/_site/intro-exercises/ex_16/index.html b/_site/intro-exercises/ex_16/index.html index b5d54a0142..4a372abc6f 100644 --- a/_site/intro-exercises/ex_16/index.html +++ b/_site/intro-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_17/index.html b/_site/intro-exercises/ex_17/index.html index 003b5141da..bbc5d841cf 100644 --- a/_site/intro-exercises/ex_17/index.html +++ b/_site/intro-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_18/index.html b/_site/intro-exercises/ex_18/index.html index d4ce8ca64a..4aef6181be 100644 --- a/_site/intro-exercises/ex_18/index.html +++ b/_site/intro-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_19/index.html b/_site/intro-exercises/ex_19/index.html index 6e2ed1a32f..b925230e1f 100644 --- a/_site/intro-exercises/ex_19/index.html +++ b/_site/intro-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_2/index.html b/_site/intro-exercises/ex_2/index.html index 72a311d740..9c45289c0c 100644 --- a/_site/intro-exercises/ex_2/index.html +++ b/_site/intro-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_20/index.html b/_site/intro-exercises/ex_20/index.html index c4d3a8a65d..901d66ed87 100644 --- a/_site/intro-exercises/ex_20/index.html +++ b/_site/intro-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_3/index.html b/_site/intro-exercises/ex_3/index.html index 1d62022bac..c855970d20 100644 --- a/_site/intro-exercises/ex_3/index.html +++ b/_site/intro-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_4/index.html b/_site/intro-exercises/ex_4/index.html index ca71d6b0d3..2ee355a2ca 100644 --- a/_site/intro-exercises/ex_4/index.html +++ b/_site/intro-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_5/index.html b/_site/intro-exercises/ex_5/index.html index 
df8c046789..ea4a468029 100644 --- a/_site/intro-exercises/ex_5/index.html +++ b/_site/intro-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_6/index.html b/_site/intro-exercises/ex_6/index.html index 6154642f6c..4b56e3a282 100644 --- a/_site/intro-exercises/ex_6/index.html +++ b/_site/intro-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_7/index.html b/_site/intro-exercises/ex_7/index.html index 0d8e11df3a..a0ed54d02b 100644 --- a/_site/intro-exercises/ex_7/index.html +++ b/_site/intro-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_8/index.html b/_site/intro-exercises/ex_8/index.html index f5b3e83375..7c1c46366f 100644 --- a/_site/intro-exercises/ex_8/index.html +++ b/_site/intro-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/ex_9/index.html b/_site/intro-exercises/ex_9/index.html index 5b7fc485a1..8a549d30b4 100644 --- a/_site/intro-exercises/ex_9/index.html +++ b/_site/intro-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/intro-exercises/index.html b/_site/intro-exercises/index.html index 8a398d8ff2..a2dd0e37a4 100644 --- a/_site/intro-exercises/index.html +++ b/_site/intro-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_1/index.html b/_site/knowledge-logic-exercises/ex_1/index.html index 53a883465e..c9f5c7e3c6 100644 --- a/_site/knowledge-logic-exercises/ex_1/index.html +++ b/_site/knowledge-logic-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_10/index.html b/_site/knowledge-logic-exercises/ex_10/index.html index 861b775803..c6ecdd1828 100644 --- a/_site/knowledge-logic-exercises/ex_10/index.html +++ b/_site/knowledge-logic-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_11/index.html b/_site/knowledge-logic-exercises/ex_11/index.html index 839e02040e..1aecd30e95 100644 --- 
a/_site/knowledge-logic-exercises/ex_11/index.html +++ b/_site/knowledge-logic-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_12/index.html b/_site/knowledge-logic-exercises/ex_12/index.html index 3354f97181..99abc88b18 100644 --- a/_site/knowledge-logic-exercises/ex_12/index.html +++ b/_site/knowledge-logic-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_13/index.html b/_site/knowledge-logic-exercises/ex_13/index.html index e5bdd983bd..8a6e0b34c2 100644 --- a/_site/knowledge-logic-exercises/ex_13/index.html +++ b/_site/knowledge-logic-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_14/index.html b/_site/knowledge-logic-exercises/ex_14/index.html index 20fa447977..093d63a3c1 100644 --- a/_site/knowledge-logic-exercises/ex_14/index.html +++ b/_site/knowledge-logic-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_15/index.html b/_site/knowledge-logic-exercises/ex_15/index.html index 0af6eeda4a..2a87ce37b0 100644 --- a/_site/knowledge-logic-exercises/ex_15/index.html +++ b/_site/knowledge-logic-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_16/index.html b/_site/knowledge-logic-exercises/ex_16/index.html index e60b7da288..ab2d02417d 100644 --- a/_site/knowledge-logic-exercises/ex_16/index.html +++ b/_site/knowledge-logic-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_17/index.html b/_site/knowledge-logic-exercises/ex_17/index.html index 3f9eba76ec..f775ba7d24 100644 --- a/_site/knowledge-logic-exercises/ex_17/index.html +++ b/_site/knowledge-logic-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_18/index.html b/_site/knowledge-logic-exercises/ex_18/index.html index ab40c4cd30..e0d84279ba 100644 --- a/_site/knowledge-logic-exercises/ex_18/index.html +++ 
b/_site/knowledge-logic-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_19/index.html b/_site/knowledge-logic-exercises/ex_19/index.html index e80e3ad362..0814707b0d 100644 --- a/_site/knowledge-logic-exercises/ex_19/index.html +++ b/_site/knowledge-logic-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_2/index.html b/_site/knowledge-logic-exercises/ex_2/index.html index e8fb138b7f..f30e737347 100644 --- a/_site/knowledge-logic-exercises/ex_2/index.html +++ b/_site/knowledge-logic-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_20/index.html b/_site/knowledge-logic-exercises/ex_20/index.html index ef8b5e4ed5..0405051b5e 100644 --- a/_site/knowledge-logic-exercises/ex_20/index.html +++ b/_site/knowledge-logic-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_21/index.html b/_site/knowledge-logic-exercises/ex_21/index.html index 6ab720999f..f241681121 100644 --- a/_site/knowledge-logic-exercises/ex_21/index.html +++ b/_site/knowledge-logic-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_22/index.html b/_site/knowledge-logic-exercises/ex_22/index.html index 74f3cb9db2..e69af5deed 100644 --- a/_site/knowledge-logic-exercises/ex_22/index.html +++ b/_site/knowledge-logic-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_23/index.html b/_site/knowledge-logic-exercises/ex_23/index.html index 9df87311e2..b1f8cf91ef 100644 --- a/_site/knowledge-logic-exercises/ex_23/index.html +++ b/_site/knowledge-logic-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_24/index.html b/_site/knowledge-logic-exercises/ex_24/index.html index af1b19105b..55356bcb0c 100644 --- a/_site/knowledge-logic-exercises/ex_24/index.html +++ b/_site/knowledge-logic-exercises/ex_24/index.html @@ -82,7 
+82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_25/index.html b/_site/knowledge-logic-exercises/ex_25/index.html index 7bba2965ef..261d974bbc 100644 --- a/_site/knowledge-logic-exercises/ex_25/index.html +++ b/_site/knowledge-logic-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_26/index.html b/_site/knowledge-logic-exercises/ex_26/index.html index c64cf3f589..9327a965a3 100644 --- a/_site/knowledge-logic-exercises/ex_26/index.html +++ b/_site/knowledge-logic-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_27/index.html b/_site/knowledge-logic-exercises/ex_27/index.html index 9b254db01b..2d41cb36e5 100644 --- a/_site/knowledge-logic-exercises/ex_27/index.html +++ b/_site/knowledge-logic-exercises/ex_27/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_28/index.html b/_site/knowledge-logic-exercises/ex_28/index.html index 74cb1a2e6d..3b3b2adcea 100644 --- a/_site/knowledge-logic-exercises/ex_28/index.html +++ b/_site/knowledge-logic-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_29/index.html b/_site/knowledge-logic-exercises/ex_29/index.html index 771c672820..10d786df94 100644 --- a/_site/knowledge-logic-exercises/ex_29/index.html +++ b/_site/knowledge-logic-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_3/index.html b/_site/knowledge-logic-exercises/ex_3/index.html index 23a944434b..ca2df81c52 100644 --- a/_site/knowledge-logic-exercises/ex_3/index.html +++ b/_site/knowledge-logic-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_30/index.html b/_site/knowledge-logic-exercises/ex_30/index.html index cc77a2305d..74b5aca9c5 100644 --- a/_site/knowledge-logic-exercises/ex_30/index.html +++ b/_site/knowledge-logic-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/knowledge-logic-exercises/ex_31/index.html b/_site/knowledge-logic-exercises/ex_31/index.html index cc1e0c5acc..dddd246053 100644 --- a/_site/knowledge-logic-exercises/ex_31/index.html +++ b/_site/knowledge-logic-exercises/ex_31/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_32/index.html b/_site/knowledge-logic-exercises/ex_32/index.html index 19bdcfc05a..3429a62936 100644 --- a/_site/knowledge-logic-exercises/ex_32/index.html +++ b/_site/knowledge-logic-exercises/ex_32/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_33/index.html b/_site/knowledge-logic-exercises/ex_33/index.html index 8dbae01751..87a31aceb1 100644 --- a/_site/knowledge-logic-exercises/ex_33/index.html +++ b/_site/knowledge-logic-exercises/ex_33/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_34/index.html b/_site/knowledge-logic-exercises/ex_34/index.html index f23250ebb5..23b30a083e 100644 --- a/_site/knowledge-logic-exercises/ex_34/index.html +++ b/_site/knowledge-logic-exercises/ex_34/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_35/index.html b/_site/knowledge-logic-exercises/ex_35/index.html index 3c7c0863dc..16f835b026 100644 --- a/_site/knowledge-logic-exercises/ex_35/index.html +++ b/_site/knowledge-logic-exercises/ex_35/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_4/index.html b/_site/knowledge-logic-exercises/ex_4/index.html index 4b5299d86f..34c283a017 100644 --- a/_site/knowledge-logic-exercises/ex_4/index.html +++ b/_site/knowledge-logic-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_5/index.html b/_site/knowledge-logic-exercises/ex_5/index.html index 685a3e06da..407bb82035 100644 --- a/_site/knowledge-logic-exercises/ex_5/index.html +++ b/_site/knowledge-logic-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_6/index.html 
b/_site/knowledge-logic-exercises/ex_6/index.html index aecf2ab37d..b7a88ecf90 100644 --- a/_site/knowledge-logic-exercises/ex_6/index.html +++ b/_site/knowledge-logic-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_7/index.html b/_site/knowledge-logic-exercises/ex_7/index.html index 3b8af11e9b..9fbf1ab044 100644 --- a/_site/knowledge-logic-exercises/ex_7/index.html +++ b/_site/knowledge-logic-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_8/index.html b/_site/knowledge-logic-exercises/ex_8/index.html index 30f36dc2c4..26c33fa91c 100644 --- a/_site/knowledge-logic-exercises/ex_8/index.html +++ b/_site/knowledge-logic-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/ex_9/index.html b/_site/knowledge-logic-exercises/ex_9/index.html index 6345955a96..5c82567f83 100644 --- a/_site/knowledge-logic-exercises/ex_9/index.html +++ b/_site/knowledge-logic-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/knowledge-logic-exercises/index.html b/_site/knowledge-logic-exercises/index.html index 3307cebd23..8586523ec4 100644 --- a/_site/knowledge-logic-exercises/index.html +++ b/_site/knowledge-logic-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_1/index.html b/_site/kr-exercises/ex_1/index.html index cf0f8f9a3f..be6dc9c30f 100644 --- a/_site/kr-exercises/ex_1/index.html +++ b/_site/kr-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_10/index.html b/_site/kr-exercises/ex_10/index.html index db767f11e9..da4cabaa32 100644 --- a/_site/kr-exercises/ex_10/index.html +++ b/_site/kr-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_11/index.html b/_site/kr-exercises/ex_11/index.html index f1111d0cca..e49237ea8b 100644 --- a/_site/kr-exercises/ex_11/index.html +++ b/_site/kr-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/kr-exercises/ex_12/index.html b/_site/kr-exercises/ex_12/index.html index 121493beca..8951bd4502 100644 --- a/_site/kr-exercises/ex_12/index.html +++ b/_site/kr-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_13/index.html b/_site/kr-exercises/ex_13/index.html index cb4acc189a..e63c42b08f 100644 --- a/_site/kr-exercises/ex_13/index.html +++ b/_site/kr-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_14/index.html b/_site/kr-exercises/ex_14/index.html index 087f1634f1..edc3a73b1e 100644 --- a/_site/kr-exercises/ex_14/index.html +++ b/_site/kr-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_15/index.html b/_site/kr-exercises/ex_15/index.html index 98153de19a..4c8f06500d 100644 --- a/_site/kr-exercises/ex_15/index.html +++ b/_site/kr-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_16/index.html b/_site/kr-exercises/ex_16/index.html index fc44e7605f..0884d2f0ed 100644 --- a/_site/kr-exercises/ex_16/index.html +++ b/_site/kr-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_17/index.html b/_site/kr-exercises/ex_17/index.html index 2dfb74ba85..103dbb0756 100644 --- a/_site/kr-exercises/ex_17/index.html +++ b/_site/kr-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_18/index.html b/_site/kr-exercises/ex_18/index.html index 3182c9f15a..99cba6d2e4 100644 --- a/_site/kr-exercises/ex_18/index.html +++ b/_site/kr-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_19/index.html b/_site/kr-exercises/ex_19/index.html index 23e01615cc..e55d1656c5 100644 --- a/_site/kr-exercises/ex_19/index.html +++ b/_site/kr-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_2/index.html b/_site/kr-exercises/ex_2/index.html index 95bf3c3871..1fe097d988 100644 --- a/_site/kr-exercises/ex_2/index.html +++ b/_site/kr-exercises/ex_2/index.html @@ 
-82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_20/index.html b/_site/kr-exercises/ex_20/index.html index 42c53930fb..224a46fedd 100644 --- a/_site/kr-exercises/ex_20/index.html +++ b/_site/kr-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_21/index.html b/_site/kr-exercises/ex_21/index.html index f3df25c381..76053b8e6b 100644 --- a/_site/kr-exercises/ex_21/index.html +++ b/_site/kr-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_22/index.html b/_site/kr-exercises/ex_22/index.html index a69049d5a4..c7e7de039d 100644 --- a/_site/kr-exercises/ex_22/index.html +++ b/_site/kr-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_23/index.html b/_site/kr-exercises/ex_23/index.html index 284797f48b..8b05dd6bd3 100644 --- a/_site/kr-exercises/ex_23/index.html +++ b/_site/kr-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_24/index.html b/_site/kr-exercises/ex_24/index.html index f04c9edab6..53b8c3acbc 100644 --- a/_site/kr-exercises/ex_24/index.html +++ b/_site/kr-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_25/index.html b/_site/kr-exercises/ex_25/index.html index 7f59a650c1..329ddf7372 100644 --- a/_site/kr-exercises/ex_25/index.html +++ b/_site/kr-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_26/index.html b/_site/kr-exercises/ex_26/index.html index 1d07d23779..30df753e99 100644 --- a/_site/kr-exercises/ex_26/index.html +++ b/_site/kr-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_27/index.html b/_site/kr-exercises/ex_27/index.html index 2e8c0f60bc..ec25b0b06b 100644 --- a/_site/kr-exercises/ex_27/index.html +++ b/_site/kr-exercises/ex_27/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_28/index.html b/_site/kr-exercises/ex_28/index.html index dc808ce36e..bf10784fe0 100644 --- a/_site/kr-exercises/ex_28/index.html +++ 
b/_site/kr-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_29/index.html b/_site/kr-exercises/ex_29/index.html index 36fa3cd3ff..36c1b44e7e 100644 --- a/_site/kr-exercises/ex_29/index.html +++ b/_site/kr-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_3/index.html b/_site/kr-exercises/ex_3/index.html index 81258f4653..fd48d0bfe0 100644 --- a/_site/kr-exercises/ex_3/index.html +++ b/_site/kr-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_30/index.html b/_site/kr-exercises/ex_30/index.html index 1c9994a4dd..90d52ffc55 100644 --- a/_site/kr-exercises/ex_30/index.html +++ b/_site/kr-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_4/index.html b/_site/kr-exercises/ex_4/index.html index 4b2a7e4ee0..0e44c7071f 100644 --- a/_site/kr-exercises/ex_4/index.html +++ b/_site/kr-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_5/index.html b/_site/kr-exercises/ex_5/index.html index 4de4359cc2..a23236caae 100644 --- a/_site/kr-exercises/ex_5/index.html +++ b/_site/kr-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_6/index.html b/_site/kr-exercises/ex_6/index.html index 7ca618d17e..15e6270f37 100644 --- a/_site/kr-exercises/ex_6/index.html +++ b/_site/kr-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_7/index.html b/_site/kr-exercises/ex_7/index.html index 9965be413b..af79925cd4 100644 --- a/_site/kr-exercises/ex_7/index.html +++ b/_site/kr-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_8/index.html b/_site/kr-exercises/ex_8/index.html index 5abf666e28..bb448ad1fb 100644 --- a/_site/kr-exercises/ex_8/index.html +++ b/_site/kr-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/ex_9/index.html b/_site/kr-exercises/ex_9/index.html index 9e603fc322..b9934d1583 100644 --- a/_site/kr-exercises/ex_9/index.html +++ 
b/_site/kr-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/kr-exercises/index.html b/_site/kr-exercises/index.html index 41da9d83ff..7d9f14f265 100644 --- a/_site/kr-exercises/index.html +++ b/_site/kr-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_1/index.html b/_site/logical-inference-exercises/ex_1/index.html index 7a5e2c474b..8591067376 100644 --- a/_site/logical-inference-exercises/ex_1/index.html +++ b/_site/logical-inference-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_10/index.html b/_site/logical-inference-exercises/ex_10/index.html index 421ebffa43..617a330238 100644 --- a/_site/logical-inference-exercises/ex_10/index.html +++ b/_site/logical-inference-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_11/index.html b/_site/logical-inference-exercises/ex_11/index.html index e6cfdc2c4d..4547f0ddae 100644 --- a/_site/logical-inference-exercises/ex_11/index.html +++ b/_site/logical-inference-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_12/index.html b/_site/logical-inference-exercises/ex_12/index.html index cd0609cfff..50c6f299f4 100644 --- a/_site/logical-inference-exercises/ex_12/index.html +++ b/_site/logical-inference-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_13/index.html b/_site/logical-inference-exercises/ex_13/index.html index fdf7395588..cca6376b21 100644 --- a/_site/logical-inference-exercises/ex_13/index.html +++ b/_site/logical-inference-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_14/index.html b/_site/logical-inference-exercises/ex_14/index.html index 56459ef790..680023d430 100644 --- a/_site/logical-inference-exercises/ex_14/index.html +++ b/_site/logical-inference-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/logical-inference-exercises/ex_15/index.html b/_site/logical-inference-exercises/ex_15/index.html index 8b0a5396f5..07129e8f74 100644 --- a/_site/logical-inference-exercises/ex_15/index.html +++ b/_site/logical-inference-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_16/index.html b/_site/logical-inference-exercises/ex_16/index.html index 6f29322999..f19cc15b79 100644 --- a/_site/logical-inference-exercises/ex_16/index.html +++ b/_site/logical-inference-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_17/index.html b/_site/logical-inference-exercises/ex_17/index.html index 8d48bc7a20..2268501992 100644 --- a/_site/logical-inference-exercises/ex_17/index.html +++ b/_site/logical-inference-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_18/index.html b/_site/logical-inference-exercises/ex_18/index.html index 7055207788..32952ea8d1 100644 --- a/_site/logical-inference-exercises/ex_18/index.html +++ b/_site/logical-inference-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_19/index.html b/_site/logical-inference-exercises/ex_19/index.html index 75887f8065..11eb0cf565 100644 --- a/_site/logical-inference-exercises/ex_19/index.html +++ b/_site/logical-inference-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_2/index.html b/_site/logical-inference-exercises/ex_2/index.html index 5fac119681..876bcc699a 100644 --- a/_site/logical-inference-exercises/ex_2/index.html +++ b/_site/logical-inference-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_20/index.html b/_site/logical-inference-exercises/ex_20/index.html index fea9462f08..3c3b9024f9 100644 --- a/_site/logical-inference-exercises/ex_20/index.html +++ b/_site/logical-inference-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/logical-inference-exercises/ex_21/index.html b/_site/logical-inference-exercises/ex_21/index.html index 024a110664..d4ecaf8761 100644 --- a/_site/logical-inference-exercises/ex_21/index.html +++ b/_site/logical-inference-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_22/index.html b/_site/logical-inference-exercises/ex_22/index.html index 66edec0823..a3fc77073a 100644 --- a/_site/logical-inference-exercises/ex_22/index.html +++ b/_site/logical-inference-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_23/index.html b/_site/logical-inference-exercises/ex_23/index.html index e42fb4abbc..b97e2fa766 100644 --- a/_site/logical-inference-exercises/ex_23/index.html +++ b/_site/logical-inference-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_24/index.html b/_site/logical-inference-exercises/ex_24/index.html index 6548a84d6a..c6b3e1007a 100644 --- a/_site/logical-inference-exercises/ex_24/index.html +++ b/_site/logical-inference-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_25/index.html b/_site/logical-inference-exercises/ex_25/index.html index 4cf30ed5ab..616b95af18 100644 --- a/_site/logical-inference-exercises/ex_25/index.html +++ b/_site/logical-inference-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_26/index.html b/_site/logical-inference-exercises/ex_26/index.html index 67be6eb55d..898818dd86 100644 --- a/_site/logical-inference-exercises/ex_26/index.html +++ b/_site/logical-inference-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_27/index.html b/_site/logical-inference-exercises/ex_27/index.html index cd81034ce3..2c005fe5e4 100644 --- a/_site/logical-inference-exercises/ex_27/index.html +++ b/_site/logical-inference-exercises/ex_27/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/logical-inference-exercises/ex_28/index.html b/_site/logical-inference-exercises/ex_28/index.html index 29fa52a7dc..14266224b6 100644 --- a/_site/logical-inference-exercises/ex_28/index.html +++ b/_site/logical-inference-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_29/index.html b/_site/logical-inference-exercises/ex_29/index.html index b710b7fd6d..07bb6d6d8e 100644 --- a/_site/logical-inference-exercises/ex_29/index.html +++ b/_site/logical-inference-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_3/index.html b/_site/logical-inference-exercises/ex_3/index.html index 83b5a058ee..bf215219b5 100644 --- a/_site/logical-inference-exercises/ex_3/index.html +++ b/_site/logical-inference-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_30/index.html b/_site/logical-inference-exercises/ex_30/index.html index 3f9a3fc956..7391dcf38c 100644 --- a/_site/logical-inference-exercises/ex_30/index.html +++ b/_site/logical-inference-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_31/index.html b/_site/logical-inference-exercises/ex_31/index.html index 79b33ea4f1..17640ef843 100644 --- a/_site/logical-inference-exercises/ex_31/index.html +++ b/_site/logical-inference-exercises/ex_31/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_4/index.html b/_site/logical-inference-exercises/ex_4/index.html index 032a23a920..9ed9f9b97c 100644 --- a/_site/logical-inference-exercises/ex_4/index.html +++ b/_site/logical-inference-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_5/index.html b/_site/logical-inference-exercises/ex_5/index.html index f60282ec0e..a89709aeb0 100644 --- a/_site/logical-inference-exercises/ex_5/index.html +++ b/_site/logical-inference-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/logical-inference-exercises/ex_6/index.html b/_site/logical-inference-exercises/ex_6/index.html index f7620e574b..6dad846f6a 100644 --- a/_site/logical-inference-exercises/ex_6/index.html +++ b/_site/logical-inference-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_7/index.html b/_site/logical-inference-exercises/ex_7/index.html index 49ddcd9d13..1575adc90c 100644 --- a/_site/logical-inference-exercises/ex_7/index.html +++ b/_site/logical-inference-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_8/index.html b/_site/logical-inference-exercises/ex_8/index.html index bc84c87484..7628fe9ace 100644 --- a/_site/logical-inference-exercises/ex_8/index.html +++ b/_site/logical-inference-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/ex_9/index.html b/_site/logical-inference-exercises/ex_9/index.html index 519eb0ba22..55b154fdce 100644 --- a/_site/logical-inference-exercises/ex_9/index.html +++ b/_site/logical-inference-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/logical-inference-exercises/index.html b/_site/logical-inference-exercises/index.html index 783b35e3a7..c48135c25c 100644 --- a/_site/logical-inference-exercises/index.html +++ b/_site/logical-inference-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md index 74f6e6588d..705a1456e1 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md @@ -1,3 +1,3 @@ -Show from first principles that $P(a{{\,|\,}}b\land a) = 1$. +Show from first principles that $P(a \mid b\land a) = 1$.
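The hunks above all perform the same mechanical substitution, file by file. A change of this kind is normally generated by a script rather than edited by hand; the sketch below is a hypothetical helper (not part of this commit) that applies one such substitution across a checkout. It emits `\mid`, which renders a conditioning bar in math mode and contains no literal `|` character that Markdown table parsing could misread.

```python
import re
from pathlib import Path

# Matches the literal LaTeX macro {{\,|\,}} used throughout the question files.
PIPE_MACRO = re.compile(r"\{\{\\,\|\\,\}\}")

def replace_pipes(text: str) -> str:
    r"""Replace every {{\,|\,}} pipe macro with ' \mid '."""
    # In re.sub replacement templates, '\\' denotes a single backslash.
    return PIPE_MACRO.sub(r" \\mid ", text)

def rewrite_tree(root: Path) -> int:
    """Rewrite all question.md files under root; return how many changed."""
    changed = 0
    for path in root.rglob("question.md"):
        old = path.read_text(encoding="utf-8")
        new = replace_pipes(old)
        if new != old:
            path.write_text(new, encoding="utf-8")
            changed += 1
    return changed
```

For example, `replace_pipes(r"$P(a{{\,|\,}}b\land a) = 1$")` yields `$P(a \mid b\land a) = 1$`; running `rewrite_tree(Path("_site/markdown"))` would sweep the whole tree in one pass.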
diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md index 13e3c5212d..c9058f9036 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md @@ -8,7 +8,7 @@ general versions of the product rule and Bayes’ rule, with respect to some background evidence $\textbf{e}$:
1. Prove the conditionalized version of the general product rule: - $${\textbf{P}}(X,Y {{\,|\,}}\textbf{e}) = {\textbf{P}}(X{{\,|\,}}Y,\textbf{e}) {\textbf{P}}(Y{{\,|\,}}\textbf{e})\ .$$
+ $${\textbf{P}}(X,Y \mid \textbf{e}) = {\textbf{P}}(X \mid Y,\textbf{e}) {\textbf{P}}(Y \mid \textbf{e})\ .$$
2. Prove the conditionalized version of Bayes’ rule in Equation (conditional-bayes-equation).
diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md index e6f864fe9e..5bf338810c 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md @@ -2,8 +2,8 @@ In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a -suitable value for $P(s{{\,|\,}}\lnot m)$, and use it to calculate -unnormalized values for $P(m{{\,|\,}}s)$ and $P(\lnot m {{\,|\,}}s)$ +suitable value for $P(s \mid \lnot m)$, and use it to calculate +unnormalized values for $P(m \mid s)$ and $P(\lnot m \mid s)$ (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression, Equation (meningitis-bayes-equation). Now normalize these values so that they add to 1. diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md index 88c7891cda..a2dcd974a1 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md @@ -4,22 +4,22 @@ This exercise investigates the way in which conditional independence relationships affect the amount of information needed for probabilistic calculations.
-1. Suppose we wish to calculate $P(h{{\,|\,}}e_1,e_2)$ and we have no +1. Suppose we wish to calculate $P(h \mid e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1{{\,|\,}}H)$, - ${\textbf{P}}(E_2{{\,|\,}}H)$ + ${\textbf{P}}(E_1 \mid H)$, + ${\textbf{P}}(E_2 \mid H)$ 2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1,E_2{{\,|\,}}H)$
+ ${\textbf{P}}(E_1,E_2 \mid H)$
3. ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1{{\,|\,}}H)$, - ${\textbf{P}}(E_2{{\,|\,}}H)$
+ ${\textbf{P}}(E_1 \mid H)$, + ${\textbf{P}}(E_2 \mid H)$
2. Suppose we know that - ${\textbf{P}}(E_1{{\,|\,}}H,E_2)={\textbf{P}}(E_1{{\,|\,}}H)$ + ${\textbf{P}}(E_1 \mid H,E_2)={\textbf{P}}(E_1 \mid H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient? diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md index 2b535600eb..464028cde9 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md @@ -1,6 +1,6 @@ Write out a general algorithm for answering queries of the form -${\textbf{P}}({Cause}{{\,|\,}}\textbf{e})$, using a naive Bayes +${\textbf{P}}({Cause} \mid \textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables. diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md index 17ebccb595..2dbfb8c7c0 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md @@ -3,10 +3,10 @@ For each of the following statements, either prove it is true or give a counterexample.
-1. If $P(a {{\,|\,}}b, c) = P(b {{\,|\,}}a, c)$, then - $P(a {{\,|\,}}c) = P(b {{\,|\,}}c)$
+1. If $P(a \mid b, c) = P(b \mid a, c)$, then + $P(a \mid c) = P(b \mid c)$
-2. If $P(a {{\,|\,}}b, c) = P(a)$, then $P(b {{\,|\,}}c) = P(b)$
+2. If $P(a \mid b, c) = P(a)$, then $P(b \mid c) = P(b)$
-3. If $P(a {{\,|\,}}b) = P(a)$, then - $P(a {{\,|\,}}b, c) = P(a {{\,|\,}}c)$
+3. If $P(a \mid b) = P(a)$, then + $P(a \mid b, c) = P(a \mid c)$
diff --git a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md index 13d3199cb7..d82500041c 100644 --- a/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md +++ b/_site/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md @@ -7,6 +7,6 @@ Figure LG-network-page
. 1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let - ${\textbf{P}}(X_2{{\,|\,}}X_1)$ be a linear + ${\textbf{P}}(X_2 \mid X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md index 2358318aa4..a4cfe127d6 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md @@ -14,7 +14,7 @@ Figure telescop 2. Which is the best network? Explain.
3. Write out a conditional distribution for - ${\textbf{P}}(M_1{{\,|\,}}N)$, for the case where + ${\textbf{P}}(M_1 \mid N)$, for the case where $N{{\,\in\\,}}\{1,2,3\}$ and $M_1{{\,\in\\,}}\{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md index af42f6e383..606670a0f6 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md @@ -7,7 +7,7 @@ $M_1,M_2{{\,\in\\,}}\{0,1,2,3,4\}$, with the symbolic CPTs as described in Exercise 
telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution -${\textbf{P}}(N{{\,|\,}}M_1{{\,=\,}}2,M_2{{\,=\,}}2)$.
+${\textbf{P}}(N \mid M_1{{\,=\,}}2,M_2{{\,=\,}}2)$.
diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md index a4f998aebe..c30e980351 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md @@ -5,7 +5,7 @@ Figure exact-inference-section applies variable elimination to the query - $${\textbf{P}}({Burglary}{{\,|\,}}{JohnCalls}{{\,=\,}}{true},{MaryCalls}{{\,=\,}}{true})\ .$$ + $${\textbf{P}}({Burglary} \mid {JohnCalls}{{\,=\,}}{true},{MaryCalls}{{\,=\,}}{true})\ .$$ Perform the calculations indicated and check that the answer is correct.
@@ -16,7 +16,7 @@ Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.
diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md index d06e0c65ef..6d56248ea6 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md @@ -3,11 +3,11 @@ The Metropolis--Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from -$\pi(\textbf{x}){{\,=\,}}P(\textbf{x}{{\,|\,}}\textbf{e})$.) Like simulated annealing, +$\pi(\textbf{x}){{\,=\,}}P(\textbf{x} \mid \textbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new -state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}{{\,|\,}}\textbf{x})$, given the current state $\textbf{x}$. +state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'} \mid \textbf{x})$, given the current state $\textbf{x}$. Then, it probabilistically accepts or rejects $\textbf{x'}$ according to the acceptance probability -$$\alpha(\textbf{x'}{{\,|\,}}\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}{{\,|\,}}\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}{{\,|\,}}\textbf{x})} \right)\ .$$ +$$\alpha(\textbf{x'} \mid \textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x} \mid \textbf{x'})}{\pi(\textbf{x})q(\textbf{x'} \mid \textbf{x})} \right)\ .$$ If the proposal is rejected, the state remains at $\textbf{x}$.
1. Consider an ordinary Gibbs sampling step for a specific variable diff --git a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md index dafb23c321..9342825740 100644 --- a/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md +++ b/_site/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md @@ -3,28 +3,28 @@ Equation (parameter-joint-repn-equation on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters -$\theta(X_i{{\,|\,}}{Parents}(X_i))$. This exercise asks you to derive +$\theta(X_i \mid {Parents}(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities -${\textbf{ P}}(X_i{{\,|\,}}{Parents}(X_i))$ from this definition.
+${\textbf{ P}}(X_i \mid {Parents}(X_i))$ from this definition.
1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation and (marginalization-equation (pages conditional-probability-equation and marginalization-equation) - to express the conditional probability $P(z{{\,|\,}}y)$ as the ratio of two sums, each over entries in the + to express the conditional probability $P(z \mid y)$ as the ratio of two sums, each over entries in the joint distribution ${\textbf{P}}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation to write this expression in terms of the network parameters - $\theta(X)$, $\theta(Y{{\,|\,}}X)$, and $\theta(Z{{\,|\,}}Y)$.
+ $\theta(X)$, $\theta(Y \mid X)$, and $\theta(Z \mid Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint - $\sum_{x_i} \theta(x_i{{\,|\,}}{parents}(X_i)){{\,=\,}}1$, show - that the resulting expression reduces to $\theta(z{{\,|\,}}y)$.
+ $\sum_{x_i} \theta(x_i \mid {parents}(X_i)){{\,=\,}}1$, show + that the resulting expression reduces to $\theta(z \mid y)$.
4. Generalize this derivation to show that - $\theta(X_i{{\,|\,}}{Parents}(X_i)) = {\textbf{P}}(X_i{{\,|\,}}{Parents}(X_i))$ + $\theta(X_i \mid {Parents}(X_i)) = {\textbf{P}}(X_i \mid {Parents}(X_i))$ for any Bayesian network.
diff --git a/_site/markdown/Future Exercises/index.html b/_site/markdown/Future Exercises/index.html index ad68d9e834..9aafb04c1b 100644 --- a/_site/markdown/Future Exercises/index.html +++ b/_site/markdown/Future Exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_1/index.html b/_site/nlp-communicating-exercises/ex_1/index.html index 03c8027d2b..8990aa7f1e 100644 --- a/_site/nlp-communicating-exercises/ex_1/index.html +++ b/_site/nlp-communicating-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_10/index.html b/_site/nlp-communicating-exercises/ex_10/index.html index 3aa947e7e8..5db4c10c24 100644 --- a/_site/nlp-communicating-exercises/ex_10/index.html +++ b/_site/nlp-communicating-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_11/index.html b/_site/nlp-communicating-exercises/ex_11/index.html index a3fb742745..406557d7bc 100644 --- a/_site/nlp-communicating-exercises/ex_11/index.html +++ b/_site/nlp-communicating-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_2/index.html b/_site/nlp-communicating-exercises/ex_2/index.html index 248a7caf5a..ba3538ae3d 100644 --- a/_site/nlp-communicating-exercises/ex_2/index.html +++ b/_site/nlp-communicating-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_3/index.html b/_site/nlp-communicating-exercises/ex_3/index.html index 7b524b6b14..fb5bb38ada 100644 --- a/_site/nlp-communicating-exercises/ex_3/index.html +++ b/_site/nlp-communicating-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_4/index.html b/_site/nlp-communicating-exercises/ex_4/index.html index c33a7620c3..0274de5ebd 100644 --- a/_site/nlp-communicating-exercises/ex_4/index.html +++ b/_site/nlp-communicating-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/nlp-communicating-exercises/ex_5/index.html b/_site/nlp-communicating-exercises/ex_5/index.html index 9525b16190..66b9815033 100644 --- a/_site/nlp-communicating-exercises/ex_5/index.html +++ b/_site/nlp-communicating-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_6/index.html b/_site/nlp-communicating-exercises/ex_6/index.html index 65da6fdc55..7ca3d912e6 100644 --- a/_site/nlp-communicating-exercises/ex_6/index.html +++ b/_site/nlp-communicating-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_7/index.html b/_site/nlp-communicating-exercises/ex_7/index.html index 478d33b2f3..f1a99ae381 100644 --- a/_site/nlp-communicating-exercises/ex_7/index.html +++ b/_site/nlp-communicating-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_8/index.html b/_site/nlp-communicating-exercises/ex_8/index.html index 6e970280e0..f1d9ed1ae4 100644 --- a/_site/nlp-communicating-exercises/ex_8/index.html +++ b/_site/nlp-communicating-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/ex_9/index.html b/_site/nlp-communicating-exercises/ex_9/index.html index 4cf18ab8a5..50400c7c7c 100644 --- a/_site/nlp-communicating-exercises/ex_9/index.html +++ b/_site/nlp-communicating-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-communicating-exercises/index.html b/_site/nlp-communicating-exercises/index.html index 3c4ff95592..c95770874b 100644 --- a/_site/nlp-communicating-exercises/index.html +++ b/_site/nlp-communicating-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_1/index.html b/_site/nlp-english-exercises/ex_1/index.html index d56410bce5..8a8e59a418 100644 --- a/_site/nlp-english-exercises/ex_1/index.html +++ b/_site/nlp-english-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_10/index.html 
b/_site/nlp-english-exercises/ex_10/index.html index 59c7b2cac7..df553b34ad 100644 --- a/_site/nlp-english-exercises/ex_10/index.html +++ b/_site/nlp-english-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_11/index.html b/_site/nlp-english-exercises/ex_11/index.html index 165f70e346..f8b4676e4c 100644 --- a/_site/nlp-english-exercises/ex_11/index.html +++ b/_site/nlp-english-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_12/index.html b/_site/nlp-english-exercises/ex_12/index.html index ada1762739..92963fdb94 100644 --- a/_site/nlp-english-exercises/ex_12/index.html +++ b/_site/nlp-english-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_13/index.html b/_site/nlp-english-exercises/ex_13/index.html index b13165dad3..fb207ff92d 100644 --- a/_site/nlp-english-exercises/ex_13/index.html +++ b/_site/nlp-english-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_14/index.html b/_site/nlp-english-exercises/ex_14/index.html index dca31fe7c8..dd004e1fbf 100644 --- a/_site/nlp-english-exercises/ex_14/index.html +++ b/_site/nlp-english-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_15/index.html b/_site/nlp-english-exercises/ex_15/index.html index 826aa366a7..908e27e26e 100644 --- a/_site/nlp-english-exercises/ex_15/index.html +++ b/_site/nlp-english-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_16/index.html b/_site/nlp-english-exercises/ex_16/index.html index d899dfd1ff..0aeb2de141 100644 --- a/_site/nlp-english-exercises/ex_16/index.html +++ b/_site/nlp-english-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_17/index.html b/_site/nlp-english-exercises/ex_17/index.html index 4fad60817b..ffdd723c38 100644 --- a/_site/nlp-english-exercises/ex_17/index.html +++ 
b/_site/nlp-english-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_18/index.html b/_site/nlp-english-exercises/ex_18/index.html index 9779f587dc..cdd650270f 100644 --- a/_site/nlp-english-exercises/ex_18/index.html +++ b/_site/nlp-english-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_19/index.html b/_site/nlp-english-exercises/ex_19/index.html index 6d099e0867..b03fe19ab1 100644 --- a/_site/nlp-english-exercises/ex_19/index.html +++ b/_site/nlp-english-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_2/index.html b/_site/nlp-english-exercises/ex_2/index.html index dff8fd1acb..11bb8e6f52 100644 --- a/_site/nlp-english-exercises/ex_2/index.html +++ b/_site/nlp-english-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_20/index.html b/_site/nlp-english-exercises/ex_20/index.html index 336164d504..e0826db4a0 100644 --- a/_site/nlp-english-exercises/ex_20/index.html +++ b/_site/nlp-english-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_21/index.html b/_site/nlp-english-exercises/ex_21/index.html index 0c8d43fb6a..7e8ac1cb5b 100644 --- a/_site/nlp-english-exercises/ex_21/index.html +++ b/_site/nlp-english-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_22/index.html b/_site/nlp-english-exercises/ex_22/index.html index 561e4bae06..f518556b4b 100644 --- a/_site/nlp-english-exercises/ex_22/index.html +++ b/_site/nlp-english-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_3/index.html b/_site/nlp-english-exercises/ex_3/index.html index ad32214cce..50ce352b9b 100644 --- a/_site/nlp-english-exercises/ex_3/index.html +++ b/_site/nlp-english-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_4/index.html b/_site/nlp-english-exercises/ex_4/index.html index 
02244f4231..f284115ade 100644 --- a/_site/nlp-english-exercises/ex_4/index.html +++ b/_site/nlp-english-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_5/index.html b/_site/nlp-english-exercises/ex_5/index.html index 7021cdf6a3..353ef47bc0 100644 --- a/_site/nlp-english-exercises/ex_5/index.html +++ b/_site/nlp-english-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_6/index.html b/_site/nlp-english-exercises/ex_6/index.html index cd45ba04a7..939899a758 100644 --- a/_site/nlp-english-exercises/ex_6/index.html +++ b/_site/nlp-english-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_7/index.html b/_site/nlp-english-exercises/ex_7/index.html index c3b46d4bb2..a3c07ffdf7 100644 --- a/_site/nlp-english-exercises/ex_7/index.html +++ b/_site/nlp-english-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_8/index.html b/_site/nlp-english-exercises/ex_8/index.html index 16fedd3423..f33337cc96 100644 --- a/_site/nlp-english-exercises/ex_8/index.html +++ b/_site/nlp-english-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/ex_9/index.html b/_site/nlp-english-exercises/ex_9/index.html index 33e56a6f4d..30835933d2 100644 --- a/_site/nlp-english-exercises/ex_9/index.html +++ b/_site/nlp-english-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/nlp-english-exercises/index.html b/_site/nlp-english-exercises/index.html index c4244723f0..7f0418aa2c 100644 --- a/_site/nlp-english-exercises/index.html +++ b/_site/nlp-english-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_1/index.html b/_site/perception-exercises/ex_1/index.html index 0803c2f6ea..a0dd32dfed 100644 --- a/_site/perception-exercises/ex_1/index.html +++ b/_site/perception-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_2/index.html 
b/_site/perception-exercises/ex_2/index.html index dbfcf35ef3..629c61a35d 100644 --- a/_site/perception-exercises/ex_2/index.html +++ b/_site/perception-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_3/index.html b/_site/perception-exercises/ex_3/index.html index 5c2238da7e..98d3ec3980 100644 --- a/_site/perception-exercises/ex_3/index.html +++ b/_site/perception-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_4/index.html b/_site/perception-exercises/ex_4/index.html index 07380c5e0d..882d86f16f 100644 --- a/_site/perception-exercises/ex_4/index.html +++ b/_site/perception-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_5/index.html b/_site/perception-exercises/ex_5/index.html index d6731acfe3..a0619894c6 100644 --- a/_site/perception-exercises/ex_5/index.html +++ b/_site/perception-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_6/index.html b/_site/perception-exercises/ex_6/index.html index aa32afc764..c5d80c776f 100644 --- a/_site/perception-exercises/ex_6/index.html +++ b/_site/perception-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_7/index.html b/_site/perception-exercises/ex_7/index.html index c12f0c12cc..7afa08e128 100644 --- a/_site/perception-exercises/ex_7/index.html +++ b/_site/perception-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/ex_8/index.html b/_site/perception-exercises/ex_8/index.html index ae775d321c..94187425ed 100644 --- a/_site/perception-exercises/ex_8/index.html +++ b/_site/perception-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/perception-exercises/index.html b/_site/perception-exercises/index.html index ff388b84c3..2c40365916 100644 --- a/_site/perception-exercises/index.html +++ b/_site/perception-exercises/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/philosophy-exercises/ex_1/index.html b/_site/philosophy-exercises/ex_1/index.html index 001b416ed3..0af420061b 100644 --- a/_site/philosophy-exercises/ex_1/index.html +++ b/_site/philosophy-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_10/index.html b/_site/philosophy-exercises/ex_10/index.html index 2190630102..76dc8e982d 100644 --- a/_site/philosophy-exercises/ex_10/index.html +++ b/_site/philosophy-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_11/index.html b/_site/philosophy-exercises/ex_11/index.html index 1c873a12e9..869517fd9b 100644 --- a/_site/philosophy-exercises/ex_11/index.html +++ b/_site/philosophy-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_12/index.html b/_site/philosophy-exercises/ex_12/index.html index 1cc7311c55..35a64ae53e 100644 --- a/_site/philosophy-exercises/ex_12/index.html +++ b/_site/philosophy-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_2/index.html b/_site/philosophy-exercises/ex_2/index.html index be31ea4ad6..4fcf8c0a95 100644 --- a/_site/philosophy-exercises/ex_2/index.html +++ b/_site/philosophy-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_3/index.html b/_site/philosophy-exercises/ex_3/index.html index 149a35812e..693fc46821 100644 --- a/_site/philosophy-exercises/ex_3/index.html +++ b/_site/philosophy-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_4/index.html b/_site/philosophy-exercises/ex_4/index.html index 7eac1ce7ec..3737b61e22 100644 --- a/_site/philosophy-exercises/ex_4/index.html +++ b/_site/philosophy-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_5/index.html b/_site/philosophy-exercises/ex_5/index.html index cdad427900..89b5759ebf 100644 --- a/_site/philosophy-exercises/ex_5/index.html +++ 
b/_site/philosophy-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_6/index.html b/_site/philosophy-exercises/ex_6/index.html index 76fb90c770..fc39efc448 100644 --- a/_site/philosophy-exercises/ex_6/index.html +++ b/_site/philosophy-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_7/index.html b/_site/philosophy-exercises/ex_7/index.html index 62fbdbcf3b..dd19fe0087 100644 --- a/_site/philosophy-exercises/ex_7/index.html +++ b/_site/philosophy-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_8/index.html b/_site/philosophy-exercises/ex_8/index.html index 6e6fa4c307..13d04093c0 100644 --- a/_site/philosophy-exercises/ex_8/index.html +++ b/_site/philosophy-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/ex_9/index.html b/_site/philosophy-exercises/ex_9/index.html index e184974446..e8f3d10c57 100644 --- a/_site/philosophy-exercises/ex_9/index.html +++ b/_site/philosophy-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/philosophy-exercises/index.html b/_site/philosophy-exercises/index.html index 70c46427a1..8c90951d04 100644 --- a/_site/philosophy-exercises/index.html +++ b/_site/philosophy-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_1/index.html b/_site/planning-exercises/ex_1/index.html index f40f504b0e..bf1e8e09eb 100644 --- a/_site/planning-exercises/ex_1/index.html +++ b/_site/planning-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_10/index.html b/_site/planning-exercises/ex_10/index.html index deed1d755e..e812f22288 100644 --- a/_site/planning-exercises/ex_10/index.html +++ b/_site/planning-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_11/index.html b/_site/planning-exercises/ex_11/index.html index 84015dc16a..69b01ad885 100644 --- a/_site/planning-exercises/ex_11/index.html +++ 
b/_site/planning-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_12/index.html b/_site/planning-exercises/ex_12/index.html index c0f825879d..a2f95d7728 100644 --- a/_site/planning-exercises/ex_12/index.html +++ b/_site/planning-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_13/index.html b/_site/planning-exercises/ex_13/index.html index 9da7f88b1a..d627bc03b6 100644 --- a/_site/planning-exercises/ex_13/index.html +++ b/_site/planning-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_14/index.html b/_site/planning-exercises/ex_14/index.html index 09d5c4c030..69c70c38b4 100644 --- a/_site/planning-exercises/ex_14/index.html +++ b/_site/planning-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_15/index.html b/_site/planning-exercises/ex_15/index.html index bbf001077b..e8acefb740 100644 --- a/_site/planning-exercises/ex_15/index.html +++ b/_site/planning-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_16/index.html b/_site/planning-exercises/ex_16/index.html index 1b249dc603..a9dcb9c9d1 100644 --- a/_site/planning-exercises/ex_16/index.html +++ b/_site/planning-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_17/index.html b/_site/planning-exercises/ex_17/index.html index 53824483de..ea42d58cac 100644 --- a/_site/planning-exercises/ex_17/index.html +++ b/_site/planning-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_18/index.html b/_site/planning-exercises/ex_18/index.html index 82ae524256..a526f56455 100644 --- a/_site/planning-exercises/ex_18/index.html +++ b/_site/planning-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_2/index.html b/_site/planning-exercises/ex_2/index.html index 0061e53a1e..fffc9b3ed4 100644 --- a/_site/planning-exercises/ex_2/index.html +++ 
b/_site/planning-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_3/index.html b/_site/planning-exercises/ex_3/index.html index 94c04f3fe8..f7ee42b123 100644 --- a/_site/planning-exercises/ex_3/index.html +++ b/_site/planning-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_4/index.html b/_site/planning-exercises/ex_4/index.html index c574fa0fc0..0af02db170 100644 --- a/_site/planning-exercises/ex_4/index.html +++ b/_site/planning-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_5/index.html b/_site/planning-exercises/ex_5/index.html index 000865079f..f4e27f4bf5 100644 --- a/_site/planning-exercises/ex_5/index.html +++ b/_site/planning-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_6/index.html b/_site/planning-exercises/ex_6/index.html index 2df8a6488e..430b9d886b 100644 --- a/_site/planning-exercises/ex_6/index.html +++ b/_site/planning-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_7/index.html b/_site/planning-exercises/ex_7/index.html index acb8ba6da1..216f4bf20d 100644 --- a/_site/planning-exercises/ex_7/index.html +++ b/_site/planning-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_8/index.html b/_site/planning-exercises/ex_8/index.html index a95b060048..60c0c9dc53 100644 --- a/_site/planning-exercises/ex_8/index.html +++ b/_site/planning-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/ex_9/index.html b/_site/planning-exercises/ex_9/index.html index 606721d329..7bc8ad16a7 100644 --- a/_site/planning-exercises/ex_9/index.html +++ b/_site/planning-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/planning-exercises/index.html b/_site/planning-exercises/index.html index 0656a7e773..edc437bb65 100644 --- a/_site/planning-exercises/index.html +++ b/_site/planning-exercises/index.html @@ -82,7 +82,7 
@@ diff --git a/_site/probability-exercises/ex_1/index.html b/_site/probability-exercises/ex_1/index.html index a01f0e4051..3a95baa304 100644 --- a/_site/probability-exercises/ex_1/index.html +++ b/_site/probability-exercises/ex_1/index.html @@ -82,7 +82,7 @@ @@ -166,7 +166,7 @@

-Show from first principles that $P(ab\land a) = 1$. +Show from first principles that $P(a $|$ b\land a) = 1$.
@@ -187,7 +187,7 @@

-Show from first principles that $P(ab\land a) = 1$. +Show from first principles that $P(a $|$ b\land a) = 1$.
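The identity in the exercise above is easy to confirm by brute-force enumeration over possible worlds; this is a sketch with an invented joint distribution over two Boolean variables (the weights are made up, and the identity holds for any choice of them):

```python
# Brute-force check over a made-up joint distribution for Boolean a, b.
joint = {
    (True, True): 0.3,   # (a, b)
    (True, False): 0.2,
    (False, True): 0.4,
    (False, False): 0.1,
}

def prob(event):
    """Sum the weights of the worlds in which `event` holds."""
    return sum(p for (a, b), p in joint.items() if event(a, b))

# P(a | b AND a) = P(a AND (b AND a)) / P(b AND a); the two events are the
# same set of worlds, so the ratio is 1 no matter what the weights are.
numerator = prob(lambda a, b: a and (b and a))
denominator = prob(lambda a, b: b and a)
print(numerator / denominator)  # 1.0
```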

diff --git a/_site/probability-exercises/ex_10/index.html b/_site/probability-exercises/ex_10/index.html index f5071c62a3..637da9c99e 100644 --- a/_site/probability-exercises/ex_10/index.html +++ b/_site/probability-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_11/index.html b/_site/probability-exercises/ex_11/index.html index 70053a3cd2..6e09b6e077 100644 --- a/_site/probability-exercises/ex_11/index.html +++ b/_site/probability-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_12/index.html b/_site/probability-exercises/ex_12/index.html index 9a0250843b..b9e9676605 100644 --- a/_site/probability-exercises/ex_12/index.html +++ b/_site/probability-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_13/index.html b/_site/probability-exercises/ex_13/index.html index cfa265418b..b828e559dc 100644 --- a/_site/probability-exercises/ex_13/index.html +++ b/_site/probability-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_14/index.html b/_site/probability-exercises/ex_14/index.html index adf813fd76..c990db0ba8 100644 --- a/_site/probability-exercises/ex_14/index.html +++ b/_site/probability-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_15/index.html b/_site/probability-exercises/ex_15/index.html index 017b10db77..3f88c5f6d5 100644 --- a/_site/probability-exercises/ex_15/index.html +++ b/_site/probability-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_16/index.html b/_site/probability-exercises/ex_16/index.html index 6eae91d12a..e30c98098a 100644 --- a/_site/probability-exercises/ex_16/index.html +++ b/_site/probability-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_17/index.html b/_site/probability-exercises/ex_17/index.html index 61baf9336f..f63dc37a2c 100644 --- 
a/_site/probability-exercises/ex_17/index.html +++ b/_site/probability-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_18/index.html b/_site/probability-exercises/ex_18/index.html index 66cd8775d2..7d2d606608 100644 --- a/_site/probability-exercises/ex_18/index.html +++ b/_site/probability-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_19/index.html b/_site/probability-exercises/ex_19/index.html index 4127875dad..89693fdbcc 100644 --- a/_site/probability-exercises/ex_19/index.html +++ b/_site/probability-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_2/index.html b/_site/probability-exercises/ex_2/index.html index 97d9da64b1..1398fc5018 100644 --- a/_site/probability-exercises/ex_2/index.html +++ b/_site/probability-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_20/index.html b/_site/probability-exercises/ex_20/index.html index 887f40f326..02f69083d4 100644 --- a/_site/probability-exercises/ex_20/index.html +++ b/_site/probability-exercises/ex_20/index.html @@ -82,7 +82,7 @@ @@ -174,7 +174,7 @@

some background evidence $\textbf{e}$:
1. Prove the conditionalized version of the general product rule: - $${\textbf{P}}(X,Y \textbf{e}) = {\textbf{P}}(XY,\textbf{e}) {\textbf{P}}(Y\textbf{e})\ .$$
+ $${\textbf{P}}(X,Y $|$\textbf{e}) = {\textbf{P}}(X$|$Y,\textbf{e}) {\textbf{P}}(Y$|$\textbf{e})\ .$$
2. Prove the conditionalized version of Bayes’ rule in Equation (conditional-bayes-equation).
@@ -206,7 +206,7 @@

some background evidence $\textbf{e}$:
1. Prove the conditionalized version of the general product rule: - $${\textbf{P}}(X,Y \textbf{e}) = {\textbf{P}}(XY,\textbf{e}) {\textbf{P}}(Y\textbf{e})\ .$$
+ $${\textbf{P}}(X,Y $|$\textbf{e}) = {\textbf{P}}(X$|$Y,\textbf{e}) {\textbf{P}}(Y$|$\textbf{e})\ .$$
2. Prove the conditionalized version of Bayes’ rule in Equation (conditional-bayes-equation).
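The conditionalized product rule in part 1 can also be sanity-checked numerically; this sketch draws a randomly weighted joint over three Boolean variables (all weights invented) and measures the worst-case violation of the identity:

```python
import random

random.seed(1)

# Numeric sanity check of the conditionalized product rule
#   P(X, Y | e) = P(X | Y, e) P(Y | e)
# on a randomly weighted joint over three Boolean variables.
worlds = [(x, y, e) for x in (0, 1) for y in (0, 1) for e in (0, 1)]
raw = {w: random.random() for w in worlds}
total = sum(raw.values())
joint = {w: p / total for w, p in raw.items()}  # normalized joint P(X, Y, E)

def P(pred):
    """Probability of the event picked out by pred(x, y, e)."""
    return sum(p for w, p in joint.items() if pred(*w))

def max_violation(e0=1):
    """Largest |lhs - rhs| over all (x, y) values, with evidence e = e0."""
    worst = 0.0
    for x0 in (0, 1):
        for y0 in (0, 1):
            p_e = P(lambda x, y, e: e == e0)
            p_ye = P(lambda x, y, e: (y, e) == (y0, e0))
            p_xye = P(lambda x, y, e: (x, y, e) == (x0, y0, e0))
            lhs = p_xye / p_e                    # P(x, y | e)
            rhs = (p_xye / p_ye) * (p_ye / p_e)  # P(x | y, e) P(y | e)
            worst = max(worst, abs(lhs - rhs))
    return worst

print(max_violation())
```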
diff --git a/_site/probability-exercises/ex_21/index.html b/_site/probability-exercises/ex_21/index.html index ebb0af7b46..e304237c28 100644 --- a/_site/probability-exercises/ex_21/index.html +++ b/_site/probability-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_22/index.html b/_site/probability-exercises/ex_22/index.html index 89baa8812f..e57da69481 100644 --- a/_site/probability-exercises/ex_22/index.html +++ b/_site/probability-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_23/index.html b/_site/probability-exercises/ex_23/index.html index f4e1e680d0..1361c05f96 100644 --- a/_site/probability-exercises/ex_23/index.html +++ b/_site/probability-exercises/ex_23/index.html @@ -82,7 +82,7 @@ @@ -168,8 +168,8 @@

In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a -suitable value for $P(s\lnot m)$, and use it to calculate -unnormalized values for $P(ms)$ and $P(\lnot m s)$ +suitable value for $P(s$|$\lnot m)$, and use it to calculate +unnormalized values for $P(m$|$s)$ and $P(\lnot m $|$s)$ (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression, Equation (meningitis-bayes-equation). Now normalize these values so that they add to 1. @@ -195,8 +195,8 @@

In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a -suitable value for $P(s\lnot m)$, and use it to calculate -unnormalized values for $P(ms)$ and $P(\lnot m s)$ +suitable value for $P(s$|$\lnot m)$, and use it to calculate +unnormalized values for $P(m$|$s)$ and $P(\lnot m $|$s)$ (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression, Equation (meningitis-bayes-equation). Now normalize these values so that they add to 1. diff --git a/_site/probability-exercises/ex_24/index.html b/_site/probability-exercises/ex_24/index.html index 02d580b4c7..6fbb456880 100644 --- a/_site/probability-exercises/ex_24/index.html +++ b/_site/probability-exercises/ex_24/index.html @@ -82,7 +82,7 @@ @@ -170,23 +170,23 @@
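A worked version of the normalization step the exercise above asks for, as a sketch: $P(s\mid m)$ and $P(m)$ are taken in the spirit of the meningitis example (assumptions here), and $P(s\mid\lnot m)$ is the made-up value the exercise requests:

```python
# Numbers chosen in the spirit of the meningitis example; all are assumptions.
p_s_given_m = 0.7          # P(s | m)
p_m = 1 / 50000            # P(m)
p_s_given_not_m = 0.01     # P(s | ~m): invented, as the exercise requests

# Unnormalized posterior values (the P(s) denominator is ignored):
u_m = p_s_given_m * p_m                # proportional to P(m | s)
u_not_m = p_s_given_not_m * (1 - p_m)  # proportional to P(~m | s)

# Normalize so the two values sum to 1:
alpha = 1 / (u_m + u_not_m)
p_m_given_s = alpha * u_m
p_not_m_given_s = alpha * u_not_m
print(p_m_given_s, p_not_m_given_s)
```

With these weights the posterior probability of meningitis stays small, because the prior $P(m)$ is tiny compared with the base rate of a stiff neck.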

relationships affect the amount of information needed for probabilistic calculations.
-1. Suppose we wish to calculate $P(he_1,e_2)$ and we have no +1. Suppose we wish to calculate $P(h$|$e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$ + ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$ 2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1,E_2H)$
+ ${\textbf{P}}(E_1,E_2$|$H)$
3. ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$
+ ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$
2. Suppose we know that - ${\textbf{P}}(E_1H,E_2)={\textbf{P}}(E_1H)$ + ${\textbf{P}}(E_1$|$H,E_2)={\textbf{P}}(E_1$|$H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient? @@ -213,23 +213,23 @@

relationships affect the amount of information needed for probabilistic calculations.
-1. Suppose we wish to calculate $P(he_1,e_2)$ and we have no +1. Suppose we wish to calculate $P(h$|$e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$ + ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$ 2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1,E_2H)$
+ ${\textbf{P}}(E_1,E_2$|$H)$
3. ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$
+ ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$
2. Suppose we know that - ${\textbf{P}}(E_1H,E_2)={\textbf{P}}(E_1H)$ + ${\textbf{P}}(E_1$|$H,E_2)={\textbf{P}}(E_1$|$H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient?

diff --git a/_site/probability-exercises/ex_25/index.html b/_site/probability-exercises/ex_25/index.html index 30a54544fa..9701e9d6e3 100644 --- a/_site/probability-exercises/ex_25/index.html +++ b/_site/probability-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_26/index.html b/_site/probability-exercises/ex_26/index.html index 940b3a2cbd..3fff2b1a4c 100644 --- a/_site/probability-exercises/ex_26/index.html +++ b/_site/probability-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_27/index.html b/_site/probability-exercises/ex_27/index.html index 47d474ebe7..0b3b5a88e3 100644 --- a/_site/probability-exercises/ex_27/index.html +++ b/_site/probability-exercises/ex_27/index.html @@ -82,7 +82,7 @@ @@ -167,7 +167,7 @@

Write out a general algorithm for answering queries of the form -${\textbf{P}}({Cause}\textbf{e})$, using a naive Bayes +${\textbf{P}}({Cause}$|$\textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables.
@@ -191,7 +191,7 @@

Write out a general algorithm for answering queries of the form -${\textbf{P}}({Cause}\textbf{e})$, using a naive Bayes +${\textbf{P}}({Cause}$|$\textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables.
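One shape such an algorithm could take, as a hedged sketch rather than the book's own pseudocode: compute an unnormalized score for each cause from the prior and the conditionals of the *observed* effects only, then normalize. Every model number below is invented:

```python
def naive_bayes_posterior(prior, cond, evidence):
    """Return P(Cause | e) for a naive Bayes model.

    prior:    dict cause -> P(cause)
    cond:     dict (effect, cause) -> P(effect = true | cause)
    evidence: dict effect -> observed truth value (any subset of effects)
    """
    scores = {}
    for cause, p in prior.items():
        score = p
        for effect, observed in evidence.items():
            p_true = cond[(effect, cause)]
            score *= p_true if observed else (1 - p_true)
        scores[cause] = score
    z = sum(scores.values())  # normalization constant
    return {cause: s / z for cause, s in scores.items()}

# Hypothetical two-cause, two-effect model; unobserved effects are simply
# left out of the product, which is what marginalizing them away amounts to.
prior = {"cause": 0.2, "not_cause": 0.8}
cond = {("e1", "cause"): 0.9, ("e1", "not_cause"): 0.1,
        ("e2", "cause"): 0.5, ("e2", "not_cause"): 0.5}
post = naive_bayes_posterior(prior, cond, {"e1": True})  # e2 unobserved
print(post)
```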

diff --git a/_site/probability-exercises/ex_28/index.html b/_site/probability-exercises/ex_28/index.html index 6fe0d2377b..477a77db9a 100644 --- a/_site/probability-exercises/ex_28/index.html +++ b/_site/probability-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_29/index.html b/_site/probability-exercises/ex_29/index.html index 9f02d41ce8..4d70e80283 100644 --- a/_site/probability-exercises/ex_29/index.html +++ b/_site/probability-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_3/index.html b/_site/probability-exercises/ex_3/index.html index 06ab7b07ee..da9d32ed35 100644 --- a/_site/probability-exercises/ex_3/index.html +++ b/_site/probability-exercises/ex_3/index.html @@ -82,7 +82,7 @@ @@ -169,13 +169,13 @@

For each of the following statements, either prove it is true or give a counterexample.
-1. If $P(a b, c) = P(b a, c)$, then - $P(a c) = P(b c)$
+1. If $P(a $|$b, c) = P(b $|$a, c)$, then + $P(a $|$c) = P(b $|$c)$
-2. If $P(a b, c) = P(a)$, then $P(b c) = P(b)$
+2. If $P(a $|$b, c) = P(a)$, then $P(b $|$c) = P(b)$
-3. If $P(a b) = P(a)$, then - $P(a b, c) = P(a c)$
+3. If $P(a $|$b) = P(a)$, then + $P(a $|$b, c) = P(a $|$c)$
@@ -199,13 +199,13 @@

For each of the following statements, either prove it is true or give a counterexample.
-1. If $P(a b, c) = P(b a, c)$, then - $P(a c) = P(b c)$
+1. If $P(a $|$b, c) = P(b $|$a, c)$, then + $P(a $|$c) = P(b $|$c)$
-2. If $P(a b, c) = P(a)$, then $P(b c) = P(b)$
+2. If $P(a $|$b, c) = P(a)$, then $P(b $|$c) = P(b)$
-3. If $P(a b) = P(a)$, then - $P(a b, c) = P(a c)$
+3. If $P(a $|$b) = P(a)$, then + $P(a $|$b, c) = P(a $|$c)$

diff --git a/_site/probability-exercises/ex_30/index.html b/_site/probability-exercises/ex_30/index.html index 173244de35..4fa60ccefe 100644 --- a/_site/probability-exercises/ex_30/index.html +++ b/_site/probability-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_31/index.html b/_site/probability-exercises/ex_31/index.html index 926e964371..660fb0fde8 100644 --- a/_site/probability-exercises/ex_31/index.html +++ b/_site/probability-exercises/ex_31/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_4/index.html b/_site/probability-exercises/ex_4/index.html index 41c5059271..7abe22ca31 100644 --- a/_site/probability-exercises/ex_4/index.html +++ b/_site/probability-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_5/index.html b/_site/probability-exercises/ex_5/index.html index cc8a2f7333..ba573dfe9c 100644 --- a/_site/probability-exercises/ex_5/index.html +++ b/_site/probability-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_6/index.html b/_site/probability-exercises/ex_6/index.html index 4682216441..b2a40663c8 100644 --- a/_site/probability-exercises/ex_6/index.html +++ b/_site/probability-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_7/index.html b/_site/probability-exercises/ex_7/index.html index 955227efe2..c09b2a9df7 100644 --- a/_site/probability-exercises/ex_7/index.html +++ b/_site/probability-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/probability-exercises/ex_8/index.html b/_site/probability-exercises/ex_8/index.html index 57d5de4afd..1a4386f2da 100644 --- a/_site/probability-exercises/ex_8/index.html +++ b/_site/probability-exercises/ex_8/index.html @@ -82,7 +82,7 @@ @@ -173,9 +173,9 @@

2. $\textbf{P}({Cavity})$.
-3. $\textbf{P}({Toothache}{cavity})$.
+3. $\textbf{P}({Toothache}$|${cavity})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$. +4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$. @@ -203,9 +203,9 @@

2. $\textbf{P}({Cavity})$.
-3. $\textbf{P}({Toothache}{cavity})$.
+3. $\textbf{P}({Toothache}$|${cavity})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$. +4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.

diff --git a/_site/probability-exercises/ex_9/index.html b/_site/probability-exercises/ex_9/index.html index 4d52b70132..e5ce7ed275 100644 --- a/_site/probability-exercises/ex_9/index.html +++ b/_site/probability-exercises/ex_9/index.html @@ -82,7 +82,7 @@ @@ -173,9 +173,9 @@

2. $\textbf{P}({Catch})$.
-3. $\textbf{P}({Cavity}{catch})$.
+3. $\textbf{P}({Cavity}$|${catch})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$.
+4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.
@@ -203,9 +203,9 @@

2. $\textbf{P}({Catch})$.
-3. $\textbf{P}({Cavity}{catch})$.
+3. $\textbf{P}({Cavity}$|${catch})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$.
+4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.

diff --git a/_site/probability-exercises/index.html b/_site/probability-exercises/index.html index 1c41072e90..6fac0e7277 100644 --- a/_site/probability-exercises/index.html +++ b/_site/probability-exercises/index.html @@ -82,7 +82,7 @@ @@ -163,7 +163,7 @@

13. Quantifying Uncertainty

-Show from first principles that $P(ab\land a) = 1$. +Show from first principles that $P(a $|$ b\land a) = 1$.

@@ -195,13 +195,13 @@

13. Quantifying Uncertainty

For each of the following statements, either prove it is true or give a counterexample.
-1. If $P(a b, c) = P(b a, c)$, then - $P(a c) = P(b c)$
+1. If $P(a $|$b, c) = P(b $|$a, c)$, then + $P(a $|$c) = P(b $|$c)$
-2. If $P(a b, c) = P(a)$, then $P(b c) = P(b)$
+2. If $P(a $|$b, c) = P(a)$, then $P(b $|$c) = P(b)$
-3. If $P(a b) = P(a)$, then - $P(a b, c) = P(a c)$
+3. If $P(a $|$b) = P(a)$, then + $P(a $|$b, c) = P(a $|$c)$

@@ -320,9 +320,9 @@

13. Quantifying Uncertainty

2. $\textbf{P}({Cavity})$.
-3. $\textbf{P}({Toothache}{cavity})$.
+3. $\textbf{P}({Toothache}$|${cavity})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$. +4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.

@@ -343,9 +343,9 @@

13. Quantifying Uncertainty

2. $\textbf{P}({Catch})$.
-3. $\textbf{P}({Cavity}{catch})$.
+3. $\textbf{P}({Cavity}$|${catch})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$.
+4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.

@@ -617,7 +617,7 @@

13. Quantifying Uncertainty

some background evidence $\textbf{e}$:
1. Prove the conditionalized version of the general product rule: - $${\textbf{P}}(X,Y \textbf{e}) = {\textbf{P}}(XY,\textbf{e}) {\textbf{P}}(Y\textbf{e})\ .$$
+ $${\textbf{P}}(X,Y $|$\textbf{e}) = {\textbf{P}}(X$|$Y,\textbf{e}) {\textbf{P}}(Y$|$\textbf{e})\ .$$
2. Prove the conditionalized version of Bayes’ rule in Equation (conditional-bayes-equation).
@@ -683,8 +683,8 @@

13. Quantifying Uncertainty

In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a -suitable value for $P(s\lnot m)$, and use it to calculate -unnormalized values for $P(ms)$ and $P(\lnot m s)$ +suitable value for $P(s$|$\lnot m)$, and use it to calculate +unnormalized values for $P(m$|$s)$ and $P(\lnot m $|$s)$ (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression, Equation (meningitis-bayes-equation). Now normalize these values so that they add to 1. @@ -705,23 +705,23 @@

13. Quantifying Uncertainty

relationships affect the amount of information needed for probabilistic calculations.
-1. Suppose we wish to calculate $P(he_1,e_2)$ and we have no +1. Suppose we wish to calculate $P(h$|$e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$ + ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$ 2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1,E_2H)$
+ ${\textbf{P}}(E_1,E_2$|$H)$
3. ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$
+ ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$
2. Suppose we know that - ${\textbf{P}}(E_1H,E_2)={\textbf{P}}(E_1H)$ + ${\textbf{P}}(E_1$|$H,E_2)={\textbf{P}}(E_1$|$H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient?

@@ -781,7 +781,7 @@

13. Quantifying Uncertainty

Write out a general algorithm for answering queries of the form -${\textbf{P}}({Cause}\textbf{e})$, using a naive Bayes +${\textbf{P}}({Cause}$|$\textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables.

diff --git a/_site/question_bank/index.html b/_site/question_bank/index.html index 63a7f781de..881fc139a2 100644 --- a/_site/question_bank/index.html +++ b/_site/question_bank/index.html @@ -82,7 +82,7 @@ @@ -8095,7 +8095,7 @@


-Show from first principles that $P(ab\land a) = 1$. +Show from first principles that $P(a $|$ b\land a) = 1$.

@@ -8127,13 +8127,13 @@


For each of the following statements, either prove it is true or give a counterexample.
-1. If $P(a b, c) = P(b a, c)$, then - $P(a c) = P(b c)$
+1. If $P(a $|$b, c) = P(b $|$a, c)$, then + $P(a $|$c) = P(b $|$c)$
-2. If $P(a b, c) = P(a)$, then $P(b c) = P(b)$
+2. If $P(a $|$b, c) = P(a)$, then $P(b $|$c) = P(b)$
-3. If $P(a b) = P(a)$, then - $P(a b, c) = P(a c)$
+3. If $P(a $|$b) = P(a)$, then + $P(a $|$b, c) = P(a $|$c)$

@@ -8252,9 +8252,9 @@


2. $\textbf{P}({Cavity})$.
-3. $\textbf{P}({Toothache}{cavity})$.
+3. $\textbf{P}({Toothache}$|${cavity})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$. +4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.

@@ -8275,9 +8275,9 @@


2. $\textbf{P}({Catch})$.
-3. $\textbf{P}({Cavity}{catch})$.
+3. $\textbf{P}({Cavity}$|${catch})$.
-4. $\textbf{P}({Cavity}{toothache}\lor {catch})$.
+4. $\textbf{P}({Cavity}$|${toothache}\lor {catch})$.

@@ -8549,7 +8549,7 @@


some background evidence $\textbf{e}$:
1. Prove the conditionalized version of the general product rule: - $${\textbf{P}}(X,Y \textbf{e}) = {\textbf{P}}(XY,\textbf{e}) {\textbf{P}}(Y\textbf{e})\ .$$
+ $${\textbf{P}}(X,Y $|$\textbf{e}) = {\textbf{P}}(X$|$Y,\textbf{e}) {\textbf{P}}(Y$|$\textbf{e})\ .$$
2. Prove the conditionalized version of Bayes’ rule in Equation (conditional-bayes-equation).
@@ -8615,8 +8615,8 @@


In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a -suitable value for $P(s\lnot m)$, and use it to calculate -unnormalized values for $P(ms)$ and $P(\lnot m s)$ +suitable value for $P(s$|$\lnot m)$, and use it to calculate +unnormalized values for $P(m$|$s)$ and $P(\lnot m $|$s)$ (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression, Equation (meningitis-bayes-equation). Now normalize these values so that they add to 1. @@ -8637,23 +8637,23 @@


relationships affect the amount of information needed for probabilistic calculations.
-1. Suppose we wish to calculate $P(he_1,e_2)$ and we have no +1. Suppose we wish to calculate $P(h$|$e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$ + ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$ 2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1,E_2H)$
+ ${\textbf{P}}(E_1,E_2$|$H)$
3. ${\textbf{P}}(H)$, - ${\textbf{P}}(E_1H)$, - ${\textbf{P}}(E_2H)$
+ ${\textbf{P}}(E_1$|$H)$, + ${\textbf{P}}(E_2$|$H)$
2. Suppose we know that - ${\textbf{P}}(E_1H,E_2)={\textbf{P}}(E_1H)$ + ${\textbf{P}}(E_1$|$H,E_2)={\textbf{P}}(E_1$|$H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient?

@@ -8713,7 +8713,7 @@


Write out a general algorithm for answering queries of the form -${\textbf{P}}({Cause}\textbf{e})$, using a naive Bayes +${\textbf{P}}({Cause}$|$\textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables.

@@ -8862,30 +8862,30 @@


Equation (parameter-joint-repn-equation on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters -$\theta(X_i{Parents}(X_i))$. This exercise asks you to derive +$\theta(X_i$|${Parents}(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities -${\textbf{ P}}(X_i{Parents}(X_i))$ from this definition.
+${\textbf{ P}}(X_i$|${Parents}(X_i))$ from this definition.
1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation and (marginalization-equation (pages conditional-probability-equation and marginalization-equation) - to express the conditional probability $P(zy)$ as the ratio of two sums, each over entries in the + to express the conditional probability $P(z$|$y)$ as the ratio of two sums, each over entries in the joint distribution ${\textbf{P}}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation to write this expression in terms of the network parameters - $\theta(X)$, $\theta(YX)$, and $\theta(ZY)$.
+ $\theta(X)$, $\theta(Y$|$X)$, and $\theta(Z$|$Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint - $\sum_{x_i} \theta(x_i{parents}(X_i))1$, show - that the resulting expression reduces to $\theta(zy)$.
+ $\sum_{x_i} \theta(x_i$|${parents}(X_i))=1$, show + that the resulting expression reduces to $\theta(z$|$y)$.
4. Generalize this derivation to show that - $\theta(X_i{Parents}(X_i)) = {\textbf{P}}(X_i{Parents}(X_i))$ + $\theta(X_i$|${Parents}(X_i)) = {\textbf{P}}(X_i$|${Parents}(X_i))$ for any Bayesian network.
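The reduction asked for in part 3 can be written out for the chain $X\rightarrow Y\rightarrow Z$; a sketch of the intended derivation, using the same network parameters:

```latex
P(z \mid y)
  = \frac{\sum_{x} P(x, y, z)}{\sum_{x, z'} P(x, y, z')}
  = \frac{\theta(z \mid y) \sum_{x} \theta(x)\,\theta(y \mid x)}
         {\bigl(\sum_{x} \theta(x)\,\theta(y \mid x)\bigr)
          \bigl(\sum_{z'} \theta(z' \mid y)\bigr)}
  = \theta(z \mid y),
```

since $\sum_{z'} \theta(z' \mid y) = 1$ by the normalization constraint and the remaining sums over $x$ cancel.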

@@ -9112,7 +9112,7 @@


1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let - ${\textbf{P}}(X_2X_1)$ be a linear + ${\textbf{P}}(X_2$|$X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
@@ -9211,7 +9211,7 @@


2. Which is the best network? Explain.
3. Write out a conditional distribution for - ${\textbf{P}}(M_1N)$, for the case where + ${\textbf{P}}(M_1$|$N)$, for the case where $N\in\{1,2,3\}$ and $M_1\in\{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
@@ -9244,7 +9244,7 @@


in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution
-${\textbf{P}}(NM_12,M_22)$.
+${\textbf{P}}(N$|$M_1=2,M_2=2)$.
@@ -9355,7 +9355,7 @@


1. Section exact-inference-section applies variable elimination to the query
- $${\textbf{P}}({Burglary}{JohnCalls}{true},{MaryCalls}{true})\ .$$
+ $${\textbf{P}}({Burglary}$|${JohnCalls}={true},{MaryCalls}={true})\ .$$
Perform the calculations indicated and check that the answer is correct.
@@ -9366,7 +9366,7 @@


of Boolean variables $X_1,\ldots, X_n$ where ${Parents}(X_i)=\{X_{i-1}\}$ for $i=2,\ldots,n$. What is the complexity of computing
- ${\textbf{P}}(X_1X_n{true})$ using
+ ${\textbf{P}}(X_1$|$X_n={true})$ using
enumeration? Using variable elimination?
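On this chain-structured network, variable elimination reduces to a backward message recursion that is linear in $n$, whereas enumeration sums over all $2^{n-2}$ assignments of the hidden variables. A sketch of the linear-time computation, with arbitrary assumed CPT values shared across all links:

```python
# P(X1 | Xn=true) on a Boolean chain X1 -> X2 -> ... -> Xn, computed by
# variable elimination in O(n). CPT values below are arbitrary placeholders,
# and the same CPT is assumed for every link for simplicity.
n = 20
prior_true = 0.6                    # P(X1=true)
cpt = {True: 0.9, False: 0.2}       # P(X_i=true | X_{i-1}=x) for all i

# Backward message m(x) = P(Xn=true | X_i = x), from i = n-1 down to i = 1.
m = {True: cpt[True], False: cpt[False]}
for _ in range(n - 2):
    m = {x: cpt[x] * m[True] + (1 - cpt[x]) * m[False] for x in (True, False)}

# Combine with the prior on X1 and normalize.
unnorm = {True: prior_true * m[True], False: (1 - prior_true) * m[False]}
z = unnorm[True] + unnorm[False]
posterior = {x: unnorm[x] / z for x in unnorm}

assert abs(posterior[True] + posterior[False] - 1.0) < 1e-12
```

Each loop iteration does constant work, so the whole query costs $O(n)$ time and $O(1)$ extra space, versus $O(2^n)$ for enumeration.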
4. Prove that the complexity of running variable elimination on a

@@ -9449,7 +9449,7 @@


Consider the query
-${\textbf{P}}({Rain}{Sprinkler}{true},{WetGrass}{true})$
+${\textbf{P}}({Rain}$|${Sprinkler}={true},{WetGrass}={true})$
in Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.
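A Gibbs sampler for this query resamples each nonevidence variable (Cloudy, Rain) from its distribution given its Markov blanket. The CPT numbers below are commonly used textbook values, assumed here purely for illustration:

```python
import random

# Gibbs sampling for P(Rain | Sprinkler=true, WetGrass=true) in the
# cloudy/sprinkler/rain/wet-grass network. CPT values are assumed
# illustrative numbers, not guaranteed to match the figure.
P_C = 0.5
P_S = {True: 0.10, False: 0.50}                  # P(Sprinkler=true | Cloudy)
P_R = {True: 0.80, False: 0.20}                  # P(Rain=true | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,  # P(WetGrass=true | Sprinkler, Rain)
       (False, True): 0.90, (False, False): 0.00}

rng = random.Random(0)
cloudy, rain = True, True            # arbitrary initial state; S=true, W=true fixed
count_rain, n = 0, 200_000
for _ in range(n):
    # Resample Rain ~ alpha * P(r | cloudy) * P(w=true | s=true, r)
    pr_t = P_R[cloudy] * P_W[(True, True)]
    pr_f = (1 - P_R[cloudy]) * P_W[(True, False)]
    rain = rng.random() < pr_t / (pr_t + pr_f)
    # Resample Cloudy ~ alpha * P(c) * P(s=true | c) * P(rain | c)
    pc_t = P_C * P_S[True] * (P_R[True] if rain else 1 - P_R[True])
    pc_f = (1 - P_C) * P_S[False] * (P_R[False] if rain else 1 - P_R[False])
    cloudy = rng.random() < pc_t / (pc_t + pc_f)
    count_rain += rain

# Exact answer under these assumed CPTs is about 0.320 (by enumeration).
assert abs(count_rain / n - 0.320) < 0.03
```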
@@ -9514,11 +9514,11 @@


The Metropolis--Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from
-$\pi(\textbf{x})P(\textbf{x}\textbf{e})$.) Like simulated annealing,
+$\pi(\textbf{x})=P(\textbf{x}$|$\textbf{e})$.) Like simulated annealing,
Metropolis–Hastings operates in two stages. First, it samples a new
-state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}\textbf{x})$, given the current state $\textbf{x}$.
+state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}$|$\textbf{x})$, given the current state $\textbf{x}$.
Then, it probabilistically accepts or rejects $\textbf{x'}$ according to the acceptance probability
-$$\alpha(\textbf{x'}\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}\textbf{x})} \right)\ .$$
+$$\alpha(\textbf{x'}$|$\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}$|$\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}$|$\textbf{x})} \right)\ .$$
If the proposal is rejected, the state remains at $\textbf{x}$.
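The two-stage propose/accept loop described above can be sketched in a few lines. This is a minimal illustration, not the exercise's solution: the target is an unnormalized standard normal and the proposal a Gaussian random walk, both chosen arbitrarily. The $q$ ratio is written out explicitly to mirror the acceptance formula, even though it cancels for this symmetric proposal:

```python
import math, random

def pi_unnorm(x):
    """Target pi(x) up to a constant: here a standard normal, chosen for illustration."""
    return math.exp(-0.5 * x * x)

def q_density(x_new, x_old, s=1.0):
    """Proposal density q(x_new | x_old) up to a constant: Gaussian random walk."""
    return math.exp(-0.5 * ((x_new - x_old) / s) ** 2)

rng = random.Random(42)
x, samples = 0.0, []
for _ in range(100_000):
    x_new = x + rng.gauss(0.0, 1.0)
    # alpha(x'|x) = min(1, pi(x') q(x|x') / (pi(x) q(x'|x)))
    alpha = min(1.0, pi_unnorm(x_new) * q_density(x, x_new)
                     / (pi_unnorm(x) * q_density(x_new, x)))
    if rng.random() < alpha:     # accept; otherwise stay at x
        x = x_new
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / len(samples)
assert abs(mean) < 0.1 and abs(var - 1.0) < 0.15
```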
1. Consider an ordinary Gibbs sampling step for a specific variable diff --git a/_site/reinforcement-learning-exercises/ex_1/index.html b/_site/reinforcement-learning-exercises/ex_1/index.html index 770725be0e..faae72fed4 100644 --- a/_site/reinforcement-learning-exercises/ex_1/index.html +++ b/_site/reinforcement-learning-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_10/index.html b/_site/reinforcement-learning-exercises/ex_10/index.html index 021b405a80..60d93ec9dc 100644 --- a/_site/reinforcement-learning-exercises/ex_10/index.html +++ b/_site/reinforcement-learning-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_11/index.html b/_site/reinforcement-learning-exercises/ex_11/index.html index 87793228b4..9a9ce794cd 100644 --- a/_site/reinforcement-learning-exercises/ex_11/index.html +++ b/_site/reinforcement-learning-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_12/index.html b/_site/reinforcement-learning-exercises/ex_12/index.html index cd4229c509..d4d163dbd5 100644 --- a/_site/reinforcement-learning-exercises/ex_12/index.html +++ b/_site/reinforcement-learning-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_13/index.html b/_site/reinforcement-learning-exercises/ex_13/index.html index 3c06d0f2a3..b21c35db35 100644 --- a/_site/reinforcement-learning-exercises/ex_13/index.html +++ b/_site/reinforcement-learning-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_2/index.html b/_site/reinforcement-learning-exercises/ex_2/index.html index e18b5592eb..3c2d9796ba 100644 --- a/_site/reinforcement-learning-exercises/ex_2/index.html +++ b/_site/reinforcement-learning-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_3/index.html 
b/_site/reinforcement-learning-exercises/ex_3/index.html index fd1bcf27ed..4e232285b2 100644 --- a/_site/reinforcement-learning-exercises/ex_3/index.html +++ b/_site/reinforcement-learning-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_4/index.html b/_site/reinforcement-learning-exercises/ex_4/index.html index 4688e1aa7c..9c2fa71a0a 100644 --- a/_site/reinforcement-learning-exercises/ex_4/index.html +++ b/_site/reinforcement-learning-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_5/index.html b/_site/reinforcement-learning-exercises/ex_5/index.html index 2cd8fe1280..268fe210eb 100644 --- a/_site/reinforcement-learning-exercises/ex_5/index.html +++ b/_site/reinforcement-learning-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_6/index.html b/_site/reinforcement-learning-exercises/ex_6/index.html index 39d106cc21..311e892196 100644 --- a/_site/reinforcement-learning-exercises/ex_6/index.html +++ b/_site/reinforcement-learning-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_7/index.html b/_site/reinforcement-learning-exercises/ex_7/index.html index ee3f4a4fcd..54bf5af10b 100644 --- a/_site/reinforcement-learning-exercises/ex_7/index.html +++ b/_site/reinforcement-learning-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_8/index.html b/_site/reinforcement-learning-exercises/ex_8/index.html index 76366a1db5..a2853c9f69 100644 --- a/_site/reinforcement-learning-exercises/ex_8/index.html +++ b/_site/reinforcement-learning-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/ex_9/index.html b/_site/reinforcement-learning-exercises/ex_9/index.html index e40ceef313..7d070f054e 100644 --- a/_site/reinforcement-learning-exercises/ex_9/index.html +++ 
b/_site/reinforcement-learning-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/reinforcement-learning-exercises/index.html b/_site/reinforcement-learning-exercises/index.html index 20b14e4d74..566e800322 100644 --- a/_site/reinforcement-learning-exercises/index.html +++ b/_site/reinforcement-learning-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_1/index.html b/_site/robotics-exercises/ex_1/index.html index b15909151f..2ac0dc118f 100644 --- a/_site/robotics-exercises/ex_1/index.html +++ b/_site/robotics-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_10/index.html b/_site/robotics-exercises/ex_10/index.html index 454ba79cbe..43187bc67c 100644 --- a/_site/robotics-exercises/ex_10/index.html +++ b/_site/robotics-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_11/index.html b/_site/robotics-exercises/ex_11/index.html index 82fcc01178..ea5d4ca6ba 100644 --- a/_site/robotics-exercises/ex_11/index.html +++ b/_site/robotics-exercises/ex_11/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_12/index.html b/_site/robotics-exercises/ex_12/index.html index 210616e188..edb600f607 100644 --- a/_site/robotics-exercises/ex_12/index.html +++ b/_site/robotics-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_2/index.html b/_site/robotics-exercises/ex_2/index.html index 58ec6e81bf..e3a25024b1 100644 --- a/_site/robotics-exercises/ex_2/index.html +++ b/_site/robotics-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_3/index.html b/_site/robotics-exercises/ex_3/index.html index 6946a8fa42..e4f1ebcaa8 100644 --- a/_site/robotics-exercises/ex_3/index.html +++ b/_site/robotics-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_4/index.html b/_site/robotics-exercises/ex_4/index.html index 7a87acf597..d994694f44 100644 --- 
a/_site/robotics-exercises/ex_4/index.html +++ b/_site/robotics-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_5/index.html b/_site/robotics-exercises/ex_5/index.html index 9b8a602500..eb93457a63 100644 --- a/_site/robotics-exercises/ex_5/index.html +++ b/_site/robotics-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_6/index.html b/_site/robotics-exercises/ex_6/index.html index c06c6b4bf4..854cc5767f 100644 --- a/_site/robotics-exercises/ex_6/index.html +++ b/_site/robotics-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_7/index.html b/_site/robotics-exercises/ex_7/index.html index 923bcb347c..eaa9b36d03 100644 --- a/_site/robotics-exercises/ex_7/index.html +++ b/_site/robotics-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_8/index.html b/_site/robotics-exercises/ex_8/index.html index 4f4697ecd4..7ddebdc499 100644 --- a/_site/robotics-exercises/ex_8/index.html +++ b/_site/robotics-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/ex_9/index.html b/_site/robotics-exercises/ex_9/index.html index 4e99bdc7f8..1f82370514 100644 --- a/_site/robotics-exercises/ex_9/index.html +++ b/_site/robotics-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/robotics-exercises/index.html b/_site/robotics-exercises/index.html index 3c29c70616..da1a186496 100644 --- a/_site/robotics-exercises/index.html +++ b/_site/robotics-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_1/index.html b/_site/search-exercises/ex_1/index.html index 1fde8f702c..a79ce53f5d 100644 --- a/_site/search-exercises/ex_1/index.html +++ b/_site/search-exercises/ex_1/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_10/index.html b/_site/search-exercises/ex_10/index.html index 3927d9904a..5e4561a572 100644 --- a/_site/search-exercises/ex_10/index.html +++ 
b/_site/search-exercises/ex_10/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_12/index.html b/_site/search-exercises/ex_12/index.html index e8fead23de..c023fe40a3 100644 --- a/_site/search-exercises/ex_12/index.html +++ b/_site/search-exercises/ex_12/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_13/index.html b/_site/search-exercises/ex_13/index.html index 33afb22f43..dd77109523 100644 --- a/_site/search-exercises/ex_13/index.html +++ b/_site/search-exercises/ex_13/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_14/index.html b/_site/search-exercises/ex_14/index.html index a6d15c1953..6ec7a09bdd 100644 --- a/_site/search-exercises/ex_14/index.html +++ b/_site/search-exercises/ex_14/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_15/index.html b/_site/search-exercises/ex_15/index.html index 670e10f375..7cbf770319 100644 --- a/_site/search-exercises/ex_15/index.html +++ b/_site/search-exercises/ex_15/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_16/index.html b/_site/search-exercises/ex_16/index.html index 3d10c0ecaa..6eea759ea9 100644 --- a/_site/search-exercises/ex_16/index.html +++ b/_site/search-exercises/ex_16/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_17/index.html b/_site/search-exercises/ex_17/index.html index ffefd015b3..5d51e7283d 100644 --- a/_site/search-exercises/ex_17/index.html +++ b/_site/search-exercises/ex_17/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_18/index.html b/_site/search-exercises/ex_18/index.html index e95f19badb..e974ef1791 100644 --- a/_site/search-exercises/ex_18/index.html +++ b/_site/search-exercises/ex_18/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_19/index.html b/_site/search-exercises/ex_19/index.html index b12b8508d7..3806fbe7b6 100644 --- a/_site/search-exercises/ex_19/index.html +++ b/_site/search-exercises/ex_19/index.html @@ -82,7 +82,7 @@ diff --git 
a/_site/search-exercises/ex_2/index.html b/_site/search-exercises/ex_2/index.html index 6223175fc0..deb85899a6 100644 --- a/_site/search-exercises/ex_2/index.html +++ b/_site/search-exercises/ex_2/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_20/index.html b/_site/search-exercises/ex_20/index.html index cb7cdffff5..464a0684ce 100644 --- a/_site/search-exercises/ex_20/index.html +++ b/_site/search-exercises/ex_20/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_21/index.html b/_site/search-exercises/ex_21/index.html index b668d923fb..4cdd17e2b0 100644 --- a/_site/search-exercises/ex_21/index.html +++ b/_site/search-exercises/ex_21/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_22/index.html b/_site/search-exercises/ex_22/index.html index c2d23285e5..2736e3d4d6 100644 --- a/_site/search-exercises/ex_22/index.html +++ b/_site/search-exercises/ex_22/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_23/index.html b/_site/search-exercises/ex_23/index.html index b85655bbe7..7552f6688c 100644 --- a/_site/search-exercises/ex_23/index.html +++ b/_site/search-exercises/ex_23/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_24/index.html b/_site/search-exercises/ex_24/index.html index e2525ef161..1831b3bcc7 100644 --- a/_site/search-exercises/ex_24/index.html +++ b/_site/search-exercises/ex_24/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_25/index.html b/_site/search-exercises/ex_25/index.html index 2941cf8fef..4c7e2c242a 100644 --- a/_site/search-exercises/ex_25/index.html +++ b/_site/search-exercises/ex_25/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_26/index.html b/_site/search-exercises/ex_26/index.html index 652c7c6c61..791749edfe 100644 --- a/_site/search-exercises/ex_26/index.html +++ b/_site/search-exercises/ex_26/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_27/index.html 
b/_site/search-exercises/ex_27/index.html index 5f9cfec16d..981a4191e7 100644 --- a/_site/search-exercises/ex_27/index.html +++ b/_site/search-exercises/ex_27/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_28/index.html b/_site/search-exercises/ex_28/index.html index b2358e70bd..785781f1d1 100644 --- a/_site/search-exercises/ex_28/index.html +++ b/_site/search-exercises/ex_28/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_29/index.html b/_site/search-exercises/ex_29/index.html index 5d71e47efc..18c9ec5907 100644 --- a/_site/search-exercises/ex_29/index.html +++ b/_site/search-exercises/ex_29/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_3/index.html b/_site/search-exercises/ex_3/index.html index 242f5101d0..ec3acc6e30 100644 --- a/_site/search-exercises/ex_3/index.html +++ b/_site/search-exercises/ex_3/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_30/index.html b/_site/search-exercises/ex_30/index.html index 43fb94cde1..515db02ec2 100644 --- a/_site/search-exercises/ex_30/index.html +++ b/_site/search-exercises/ex_30/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_31/index.html b/_site/search-exercises/ex_31/index.html index f45c90dbc2..956706e0a3 100644 --- a/_site/search-exercises/ex_31/index.html +++ b/_site/search-exercises/ex_31/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_32/index.html b/_site/search-exercises/ex_32/index.html index 888738c6e1..3506e0aaad 100644 --- a/_site/search-exercises/ex_32/index.html +++ b/_site/search-exercises/ex_32/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_33/index.html b/_site/search-exercises/ex_33/index.html index 8e121a0f8f..d471bb5789 100644 --- a/_site/search-exercises/ex_33/index.html +++ b/_site/search-exercises/ex_33/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_34/index.html b/_site/search-exercises/ex_34/index.html index b367d1c35c..c0f345c010 
100644 --- a/_site/search-exercises/ex_34/index.html +++ b/_site/search-exercises/ex_34/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_35/index.html b/_site/search-exercises/ex_35/index.html index 8d433217e0..ea73bf0ce5 100644 --- a/_site/search-exercises/ex_35/index.html +++ b/_site/search-exercises/ex_35/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_36/index.html b/_site/search-exercises/ex_36/index.html index ed67da7474..993975c4f9 100644 --- a/_site/search-exercises/ex_36/index.html +++ b/_site/search-exercises/ex_36/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_37/index.html b/_site/search-exercises/ex_37/index.html index ba346f373c..4f581b0adf 100644 --- a/_site/search-exercises/ex_37/index.html +++ b/_site/search-exercises/ex_37/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_38/index.html b/_site/search-exercises/ex_38/index.html index ff8e366789..49cbb2dbb0 100644 --- a/_site/search-exercises/ex_38/index.html +++ b/_site/search-exercises/ex_38/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_39/index.html b/_site/search-exercises/ex_39/index.html index 83975f01c4..940d9360bf 100644 --- a/_site/search-exercises/ex_39/index.html +++ b/_site/search-exercises/ex_39/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_4/index.html b/_site/search-exercises/ex_4/index.html index fde4d0924d..50df6deac4 100644 --- a/_site/search-exercises/ex_4/index.html +++ b/_site/search-exercises/ex_4/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_40/index.html b/_site/search-exercises/ex_40/index.html index f1b04c032b..316443140f 100644 --- a/_site/search-exercises/ex_40/index.html +++ b/_site/search-exercises/ex_40/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_5/index.html b/_site/search-exercises/ex_5/index.html index 9bcfe1d043..35439aa24f 100644 --- a/_site/search-exercises/ex_5/index.html +++ 
b/_site/search-exercises/ex_5/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_6/index.html b/_site/search-exercises/ex_6/index.html index b93f6230f4..5218d7b0b8 100644 --- a/_site/search-exercises/ex_6/index.html +++ b/_site/search-exercises/ex_6/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_7/index.html b/_site/search-exercises/ex_7/index.html index 13e995ef21..3542f97af9 100644 --- a/_site/search-exercises/ex_7/index.html +++ b/_site/search-exercises/ex_7/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_8/index.html b/_site/search-exercises/ex_8/index.html index c6b2152572..0e9771a42b 100644 --- a/_site/search-exercises/ex_8/index.html +++ b/_site/search-exercises/ex_8/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/ex_9/index.html b/_site/search-exercises/ex_9/index.html index 02bcfa733e..0d32fbc974 100644 --- a/_site/search-exercises/ex_9/index.html +++ b/_site/search-exercises/ex_9/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search-exercises/index.html b/_site/search-exercises/index.html index 053303ea6e..1d87f56f41 100644 --- a/_site/search-exercises/index.html +++ b/_site/search-exercises/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search/index.html b/_site/search/index.html index 4d9d3ed164..ffb6fb1b00 100644 --- a/_site/search/index.html +++ b/_site/search/index.html @@ -82,7 +82,7 @@ diff --git a/_site/search_data.json b/_site/search_data.json index 572fed50fa..c76543f52a 100644 --- a/_site/search_data.json +++ b/_site/search_data.json @@ -30,7 +30,7 @@ "dbn-exercises-ex-3": { "title": "Exercise 15.3", "breadcrumb": "15-Probabilistic-Reasoning-Over-Time", - "content" : "This exercise develops a space-efficient variant ofthe forward–backward algorithm described inFigure forward-backward-algorithm (page forward-backward-algorithm).We wish to compute $$textbf{P} (textbf{X}_k|textbf{e}_{1:t})$$ for$$k=1,ldots ,t$$. This will be done with a divide-and-conquerapproach.1. 
Suppose, for simplicity, that $t$ is odd, and let the halfway point be $h=(t+1)/2$. Show that $$textbf{P} (textbf{X}_k|textbf{e}_{1:t}) $$ can be computed for $k=1,ldots ,h$ given just the initial forward message $$textbf{f}_{1:0}$$, the backward message $$textbf{b}_{h+1:t}$$, and the evidence $$textbf{e}_{1:h}$$.2. Show a similar result for the second half of the sequence.3. Given the results of (a) and (b), a recursive divide-and-conquer algorithm can be constructed by first running forward along the sequence and then backward from the end, storing just the required messages at the middle and the ends. Then the algorithm is called on each half. Write out the algorithm in detail.4. Compute the time and space complexity of the algorithm as a function of $t$, the length of the sequence. How does this change if we divide the input into more than two pieces?", + "content" : "This exercise develops a space-efficient variant ofthe forward–backward algorithm described inFigure forward-backward-algorithm (page forward-backward-algorithm).We wish to compute $textbf{P} (textbf{X}_k|textbf{e}_{1:t})$ for$k=1,ldots ,t$. This will be done with a divide-and-conquerapproach.1. Suppose, for simplicity, that $t$ is odd, and let the halfway point be $h=(t+1)/2$. Show that $textbf{P} (textbf{X}_k|textbf{e}_{1:t}) $ can be computed for $k=1,ldots ,h$ given just the initial forward message $textbf{f}_{1:0}$, the backward message $textbf{b}_{h+1:t}$, and the evidence $textbf{e}_{1:h}$.2. Show a similar result for the second half of the sequence.3. Given the results of (a) and (b), a recursive divide-and-conquer algorithm can be constructed by first running forward along the sequence and then backward from the end, storing just the required messages at the middle and the ends. Then the algorithm is called on each half. Write out the algorithm in detail.4. Compute the time and space complexity of the algorithm as a function of $t$, the length of the sequence. 
How does this change if we divide the input into more than two pieces?", "url": " /dbn-exercises/ex_3/" } @@ -75,7 +75,7 @@ "dbn-exercises-ex-12": { "title": "Exercise 15.12", "breadcrumb": "15-Probabilistic-Reasoning-Over-Time", - "content" : "Often, we wish to monitor a continuous-statesystem whose behavior switches unpredictably among a set of $k$ distinct“modes.” For example, an aircraft trying to evade a missile can executea series of distinct maneuvers that the missile may attempt to track. ABayesian network representation of such a switching Kalmanfilter model is shown inFigure switching-kf-figure.1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate $${textbf{P}}(textbf{X}_0)$$ is a multivariate Gaussian distribution. Show that the prediction $${textbf{P}}(textbf{X}_1)$$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such that the weights sum to 1.2. Show that if the current continuous state estimate $${textbf{P}}(textbf{X}_t|textbf{e}_{1:t})$$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate $${textbf{P}}(textbf{X}_{t+1}|textbf{e}_{1:t+1})$$ will be a mixture of $km$ Gaussians.3. What aspect of the temporal process do the weights in the Gaussian mixture represent?The results in (a) and (b) show that the representation of the posteriorgrows without limit even for switching Kalman filters, which are amongthe simplest hybrid dynamic models.", + "content" : "Often, we wish to monitor a continuous-statesystem whose behavior switches unpredictably among a set of $k$ distinct“modes.” For example, an aircraft trying to evade a missile can executea series of distinct maneuvers that the missile may attempt to track. ABayesian network representation of such a switching Kalmanfilter model is shown inFigure switching-kf-figure.1. 
Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate ${textbf{P}}(textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction ${textbf{P}}(textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such that the weights sum to 1.2. Show that if the current continuous state estimate ${textbf{P}}(textbf{X}_t|textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate ${textbf{P}}(textbf{X}_{t+1}|textbf{e}_{1:t+1})$ will be a mixture of $km$ Gaussians.3. What aspect of the temporal process do the weights in the Gaussian mixture represent?The results in (a) and (b) show that the representation of the posteriorgrows without limit even for switching Kalman filters, which are amongthe simplest hybrid dynamic models.", "url": " /dbn-exercises/ex_12/" } @@ -111,7 +111,7 @@ "dbn-exercises-ex-7": { "title": "Exercise 15.7", "breadcrumb": "15-Probabilistic-Reasoning-Over-Time", - "content" : "In Section hmm-localization-section, the priordistribution over locations is uniform and the transition model assumesan equal probability of moving to any neighboring square. What if thoseassumptions are wrong? Suppose that the initial location is actuallychosen uniformly from the northwest quadrant of the room and the actionactually tends to move southeast[hmm-robot-southeast-page]. Keepingthe HMM model fixed, explore the effect on localization and pathaccuracy as the southeasterly tendency increases, for different valuesof $epsilon$.", + "content" : "In Section hmm-localization-section, the priordistribution over locations is uniform and the transition model assumesan equal probability of moving to any neighboring square. What if thoseassumptions are wrong? Suppose that the initial location is actuallychosen uniformly from the northwest quadrant of the room and the actionactually tends to move southeast. 
Keepingthe HMM model fixed, explore the effect on localization and pathaccuracy as the southeasterly tendency increases, for different valuesof $epsilon$.", "url": " /dbn-exercises/ex_7/" } @@ -203,7 +203,7 @@ "philosophy-exercises-ex-12": { "title": "Exercise 26.12", "breadcrumb": "26-Philosophical-Foundations", - "content" : "Some critics object that AI is impossible, while others object that itis *too* possible and that ultraintelligent machines pose athreat. Which of these objections do you think is more likely? Would itbe a contradiction for someone to hold both positions?", + "content" : "Some critics object that AI is impossible, while others object that itis too possible and that ultraintelligent machines pose athreat. Which of these objections do you think is more likely? Would itbe a contradiction for someone to hold both positions?", "url": " /philosophy-exercises/ex_12/" } @@ -268,7 +268,7 @@ "concept-learning-exercises-ex-16": { "title": "Exercise 18.16", "breadcrumb": "18-Learning-From-Examples", - "content" : "Construct a decision list to classify the data below.Select tests to be as small as possible (in terms of attributes),breaking ties among tests with the same number of attributes byselecting the one that classifies the greatest number of examplescorrectly. 
If multiple tests have the same number of attributes andclassify the same number of examples, then break the tie usingattributes with lower index numbers (e.g., select $A_1$ over $A_2$).| | $quad A_1quad$ | $quad A_2quad$ | $quad A_3quad$ | $quad A_yquad$ | $quad yquad$ || --- | --- | --- | --- | --- | --- || $textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 || $textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 || $textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 || $textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 || $textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 || $textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 || $textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 || $textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 |", + "content" : "Construct a decision list to classify the data below.Select tests to be as small as possible (in terms of attributes),breaking ties among tests with the same number of attributes byselecting the one that classifies the greatest number of examplescorrectly. If multiple tests have the same number of attributes andclassify the same number of examples, then break the tie usingattributes with lower index numbers (e.g., select $A_1$ over $A_2$).$$begin{array} {|r|r|}hline textbf{Example} &amp; A_1 &amp; A_2 &amp; A_3 &amp; A_4 &amp; y hline textbf{x}_1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 textbf{x}_2 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 textbf{x}_3 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 textbf{x}_4 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 textbf{x}_5 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 1 textbf{x}_6 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 textbf{x}_7 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 textbf{x}_8 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 hline end{array}$$", "url": " /concept-learning-exercises/ex_16/" } @@ -295,7 +295,7 @@ "concept-learning-exercises-ex-27": { "title": "Exercise 18.27", "breadcrumb": "18-Learning-From-Examples", - "content" : "Consider the following set of examples, each with six inputs and onetarget output:| | | | | | | | | | | | | | | || --- | --- | --- | --- | --- | --- || $textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 
0 | 0 | 0 | 0 || $textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 || $textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 || $textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 || $textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 || $textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 || $textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |1. Run the perceptron learning rule on these data and show the final weights.2. Run the decision tree learning rule, and show the resulting decision tree.3. Comment on your results.", + "content" : "Consider the following set of examples, each with six inputs and onetarget output:$$begin{array} {|r|r|}hline textbf{Example} &amp; A_1 &amp; A_2 &amp; A_3 &amp; A_4 &amp; A_5 &amp; A_6 &amp; A_7 &amp; A_8 &amp; A_9 &amp; A_{10} &amp; A_{11} &amp; A_{12} &amp; A_{13} &amp; A_{14} hline textbf{x}_1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 textbf{x}_2 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 1 textbf{x}_3 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 textbf{x}_4 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 1 textbf{x}_5 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 textbf{x}_6 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 0 textbf{T} &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 hline end{array}$$1. Run the perceptron learning rule on these data and show the final weights.2. Run the decision tree learning rule, and show the resulting decision tree.3. 
Comment on your results.", "url": " /concept-learning-exercises/ex_27/" } @@ -331,7 +331,7 @@ "concept-learning-exercises-ex-21": { "title": "Exercise 18.21", "breadcrumb": "18-Learning-From-Examples", - "content" : "Figure &lt;ahref=""#"&gt;kernel-machine-figure&lt;/a&gt;showed how a circle at the origin can be linearly separated by mappingfrom the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$.But what if the circle is not located at the origin? What if it is anellipse, not a circle? The general equation for a circle (and hence thedecision boundary) is $(x_1-a)^2 +(x_2-b)^2 - r^20$, and the general equation for an ellipse is$c(x_1-a)^2 + d(x_2-b)^2 - 1 0$.1. Expand out the equation for the circle and show what the weights $w_i$ would be for the decision boundary in the four-dimensional feature space $(x_1, x_2, x_1^2, x_2^2)$. Explain why this means that any circle is linearly separable in this space.2. Do the same for ellipses in the five-dimensional feature space $(x_1, x_2, x_1^2, x_2^2, x_1 x_2)$.", + "content" : "Figure kernel-machine-figureshowed how a circle at the origin can be linearly separated by mappingfrom the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$.But what if the circle is not located at the origin? What if it is anellipse, not a circle? The general equation for a circle (and hence thedecision boundary) is $(x_1-a)^2 +(x_2-b)^2 - r^20$, and the general equation for an ellipse is$c(x_1-a)^2 + d(x_2-b)^2 - 1 0$.1. Expand out the equation for the circle and show what the weights $w_i$ would be for the decision boundary in the four-dimensional feature space $(x_1, x_2, x_1^2, x_2^2)$. Explain why this means that any circle is linearly separable in this space.2. 
Do the same for ellipses in the five-dimensional feature space $(x_1, x_2, x_1^2, x_2^2, x_1 x_2)$.", "url": " /concept-learning-exercises/ex_21/" } @@ -502,7 +502,7 @@ "concept-learning-exercises-ex-7": { "title": "Exercise 18.7", "breadcrumb": "18-Learning-From-Examples", - "content" : "[nonnegative-gain-exercise]Suppose that an attribute splits the set ofexamples $E$ into subsets $E_k$ and that each subset has $p_k$positive examples and $n_k$ negative examples. Show that theattribute has strictly positive information gain unless the ratio$p_k/(p_k+n_k)$ is the same for all $k$.", + "content" : "Suppose that an attribute splits the set ofexamples $E$ into subsets $E_k$ and that each subset has $p_k$positive examples and $n_k$ negative examples. Show that theattribute has strictly positive information gain unless the ratio$p_k/(p_k+n_k)$ is the same for all $k$.", "url": " /concept-learning-exercises/ex_7/" } @@ -538,7 +538,7 @@ "concept-learning-exercises-ex-8": { "title": "Exercise 18.8", "breadcrumb": "18-Learning-From-Examples", - "content" : "Consider the following data set comprised of three binary inputattributes ($A_1, A_2$, and $A_3$) and one binary output:| $quad textbf{Example}$ | $quad A_1quad$ | $quad A_2quad$ | $quad A_3quad$ | $quad Outputspace y$ || --- | --- | --- | --- | --- || $textbf{x}_1$ | 1 | 0 | 0 | 0 || $textbf{x}_2$ | 1 | 0 | 1 | 0 || $textbf{x}_3$ | 0 | 1 | 0 | 0 || $textbf{x}_4$ | 1 | 1 | 1 | 1 || $textbf{x}_5$ | 1 | 1 | 0 | 1 |Use the algorithm in Figure DTL-algorithm(page DTL-algorithm) to learn a decision tree for these data. 
Show thecomputations made to determine the attribute to split at each node.", + "content" : "Consider the following data set comprised of three binary inputattributes ($A_1, A_2$, and $A_3$) and one binary output:$$begin{array} {|r|r|}hline textbf{Example} &amp; A_1 &amp; A_2 &amp; A_3 &amp; Outputspace y hline textbf{x}_1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 textbf{x}_2 &amp; 1 &amp; 0 &amp; 1 &amp; 0 textbf{x}_3 &amp; 0 &amp; 1 &amp; 0 &amp; 0 textbf{x}_4 &amp; 1 &amp; 1 &amp; 1 &amp; 1 textbf{x}_5 &amp; 1 &amp; 1 &amp; 0 &amp; 1 hline end{array}$$Use the algorithm in Figure DTL-algorithm(page DTL-algorithm) to learn a decision tree for these data. Show thecomputations made to determine the attribute to split at each node.", "url": " /concept-learning-exercises/ex_8/" } @@ -695,7 +695,7 @@ "nlp-english-exercises-ex-10": { "title": "Exercise 23.10", "breadcrumb": "23-Natural-Language-For-Communication", - "content" : "In this exercise you will transform $large varepsilon_0$ intoChomsky Normal Form (CNF). There are five steps: (a) Add a new startsymbol, (b) Eliminate $epsilon$ rules, (c) Eliminate multiple words onright-hand sides, (d) Eliminate rules of the form(${it X}$$rightarrow$${it Y}$),(e) Convert long right-hand sides into binary rules.1. The start symbol, $S$, can occur only on the left-hand side in CNF. Replace ${it S}$ everywhere by a new symbol ${it S'}$ and add a rule of the form ${it S}$ $rightarrow$${it S'}$.2. The empty string, $epsilon$ cannot appear on the right-hand side in CNF. $large varepsilon_0$ does not have any rules with $epsilon$, so this is not an issue.3. A word can appear on the right-hand side in a rule only of the form (${it X}$ $rightarrow$*word*). Replace each rule of the form (${it X}$ $rightarrow$…*word* …) with (${it X}$ $rightarrow$…${it W'}$ …) and (${it W'}$ $rightarrow$*word*), using a new symbol ${it W'}$.4. 
A rule (${it X}$ $rightarrow$${it Y}$) is not allowed in CNF; it must be (${it X}$ $rightarrow$${it Y}$ ${it Z}$) or (${it X}$ $rightarrow$*word*). Replace each rule of the form (${it X}$ $rightarrow$${it Y}$) with a set of rules of the form (${it X}$ $rightarrow$…), one for each rule (${it Y}$ $rightarrow$…), where (…) indicates one or more symbols.5. Replace each rule of the form (${it X}$ $rightarrow$${it Y}$ ${it Z}$ …) with two rules, (${it X}$ $rightarrow$${it Y}$ ${it Z'}$) and (${it Z'}$ $rightarrow$${it Z}$ …), where ${it Z'}$ is a new symbol.Show each step of the process and the final set of rules.", + "content" : "In this exercise you will transform $large varepsilon_0$ intoChomsky Normal Form (CNF). There are five steps: (a) Add a new startsymbol, (b) Eliminate $epsilon$ rules, (c) Eliminate multiple words onright-hand sides, (d) Eliminate rules of the form(${it X} rightarrow$${it Y}$),(e) Convert long right-hand sides into binary rules.1. The start symbol, $S$, can occur only on the left-hand side in CNF. Replace ${it S}$ everywhere by a new symbol ${it S'}$ and add a rule of the form ${it S}$ $rightarrow$${it S'}$.2. The empty string, $epsilon$ cannot appear on the right-hand side in CNF. $large varepsilon_0$ does not have any rules with $epsilon$, so this is not an issue.3. A word can appear on the right-hand side in a rule only of the form (${it X}$ $rightarrow$word). Replace each rule of the form (${it X}$ $rightarrow$…word …) with (${it X}$ $rightarrow$…${it W'}$ …) and (${it W'}$ $rightarrow$word), using a new symbol ${it W'}$.4. A rule (${it X}$ $rightarrow$${it Y}$) is not allowed in CNF; it must be (${it X}$ $rightarrow$${it Y}$ ${it Z}$) or (${it X}$ $rightarrow$word). Replace each rule of the form (${it X}$ $rightarrow$${it Y}$) with a set of rules of the form (${it X}$ $rightarrow$…), one for each rule (${it Y}$ $rightarrow$…), where (…) indicates one or more symbols.5. 
Replace each rule of the form (${it X}$ $rightarrow$${it Y}$ ${it Z}$ …) with two rules, (${it X}$ $rightarrow$${it Y}$ ${it Z'}$) and (${it Z'}$ $rightarrow$${it Z}$ …), where ${it Z'}$ is a new symbol.Show each step of the process and the final set of rules.", "url": " /nlp-english-exercises/ex_10/" } @@ -776,7 +776,7 @@ "nlp-english-exercises-ex-14": { "title": "Exercise 23.14", "breadcrumb": "23-Natural-Language-For-Communication", - "content" : "An augmented context-free grammar can represent languages that a regularcontext-free grammar cannot. Show an augmented context-free grammar forthe language $a^nb^nc^n$. The allowable values for augmentationvariables are 1 and $SUCCESSOR(n)$, where $n$ is a value. The rule for a sentencein this language is$$S(n) rightarrow}}A(n) B(n) C(n) .$$Show the rule(s) for each of ${it A}$,${it B}$, and ${it C}$.", + "content" : "An augmented context-free grammar can represent languages that a regularcontext-free grammar cannot. Show an augmented context-free grammar forthe language $a^nb^nc^n$. The allowable values for augmentationvariables are 1 and $SUCCESSOR(n)$, where $n$ is a value. The rule for a sentencein this language is$$S(n) rightarrow A(n) B(n) C(n) .$$Show the rule(s) for each of ${it A}$,${it B}$, and ${it C}$.", "url": " /nlp-english-exercises/ex_14/" } @@ -794,7 +794,7 @@ "nlp-english-exercises-ex-7": { "title": "Exercise 23.7", "breadcrumb": "23-Natural-Language-For-Communication", - "content" : "Consider the sentence “Someone walked slowly to the supermarket” and alexicon consisting of the following words:$Pronoun rightarrow textbf{someone} quad Verb rightarrow textbf{walked}$$Adv rightarrow textbf{slowly} quad Prep rightarrow textbf{to}$$Article rightarrow textbf{the} quad Noun rightarrow textbf{supermarket}$Which of the following three grammars, combined with the lexicon,generates the given sentence? 
Show the corresponding parse tree(s).| $quadquadquadquad (A):quadquadquadquad$ | $quadquadquadquad(B):quadquadquadquad$ | $quadquadquadquad(C):quadquadquadquad$ || --- | --- | --- || $Srightarrow NPspace VP$ | $Srightarrow NPspace VP$ | $Srightarrow NPspace VP$ || $NPrightarrow Pronoun$ | $NPrightarrow Pronoun$ | $NPrightarrow Pronoun$ || $NPrightarrow Articlespace Noun $ | $NPrightarrow Noun$ | $NPrightarrow Articlespace NP$ || $VPrightarrow VPspace PP$ | $NPrightarrow Articlespace NP$ | $VPrightarrow Verbspace Adv$ || $VPrightarrow VPspace Advspace Adv$ | $VPrightarrow Verbspace Vmod$ | $Advrightarrow Advspace Adv$ || $VPrightarrow Verb$ | $Vmodrightarrow Advspace Vmod$ | $Advrightarrow PP$ || $PPrightarrow Prepspace NP$ | $Vmodrightarrow Adv$ | $PPrightarrow Prepspace NP$ || $NPrightarrow Noun$ | $Advrightarrow PP$ | $NPrightarrow Noun$ || $quad$ | $PPrightarrow Prepspace NP$ | $quad$ |For each of the preceding three grammars, write down three sentences ofEnglish and three sentences of non-English generated by the grammar.Each sentence should be significantly different, should be at least sixwords long, and should include some new lexical entries (which youshould define). Suggest ways to improve each grammar to avoid generatingthe non-English sentences.", + "content" : "Consider the sentence “Someone walked slowly to the supermarket” and alexicon consisting of the following words:$Pronoun rightarrow textbf{someone} quad Verb rightarrow textbf{walked}$$Adv rightarrow textbf{slowly} quad Prep rightarrow textbf{to}$$Article rightarrow textbf{the} quad Noun rightarrow textbf{supermarket}$Which of the following three grammars, combined with the lexicon,generates the given sentence? 
Show the corresponding parse tree(s).$$quadquadquadquad (A):quadquadquadquad quadquadquadquad(B):quadquadquadquad quadquadquadquad(C):quadquadquadquad S rightarrow NP space VP quadquadquadquad quadquadquadquad Srightarrow NPspace VP quadquadquadquad Srightarrow NPspace VPquadquadquadquad NPrightarrow Pronoun quadquadquadquad NPrightarrow Pronoun quadquadquadquad NPrightarrow Pronounquadquadquadquad NPrightarrow Articlespace Noun quadquadquadquad NPrightarrow Noun quadquadquadquad NPrightarrow Articlespace NPquadquadquadquad VPrightarrow VPspace PP quadquadquadquad NPrightarrow Articlespace NP quadquadquadquad VPrightarrow Verbspace Advquadquadquadquad VPrightarrow VPspace Advspace Adv quadquadquadquad VPrightarrow Verbspace Vmod quadquadquadquad Advrightarrow Advspace Advquadquadquadquad VPrightarrow Verb quadquadquadquad Vmodrightarrow Advspace Vmod quadquadquadquad Advrightarrow PPquadquadquadquad PPrightarrow Prepspace NP quadquadquadquad Vmodrightarrow Adv quadquadquadquad PPrightarrow Prepspace NPquadquadquadquad NPrightarrow Noun quadquadquadquad Advrightarrow PP quadquadquadquad NPrightarrow Nounquadquadquadquadquad quadquadquadquad PPrightarrow Prepspace NP quadquadquadquad quadquadquadquad$$For each of the preceding three grammars, write down three sentences ofEnglish and three sentences of non-English generated by the grammar.Each sentence should be significantly different, should be at least sixwords long, and should include some new lexical entries (which youshould define). 
Suggest ways to improve each grammar to avoid generatingthe non-English sentences.", "url": " /nlp-english-exercises/ex_7/" } @@ -904,7 +904,7 @@ "probability-exercises-ex-21": { "title": "Exercise 13.21", "breadcrumb": "13-Quantifying-Uncertainity", - "content" : "Show that the statement of conditional independence$${textbf{P}}(X,Y Z) = {textbf{P}}(XZ) {textbf{P}}(YZ)$$is equivalent to each of the statements$${textbf{P}}(XY,Z) = {textbf{P}}(XZ) quadmbox{and}quad {textbf{P}}(YX,Z) = {textbf{P}}(YZ) .$$", + "content" : "Show that the statement of conditional independence$${textbf{P}}(X,Y | Z) = {textbf{P}}(X | Z) {textbf{P}}(Y | Z)$$is equivalent to each of the statements$${textbf{P}}(X | Y,Z) = {textbf{P}}(X | Z) quadmbox{and}quad {textbf{P}}(Y | X,Z) = {textbf{P}}(Y | Z) .$$", "url": " /probability-exercises/ex_21/" } @@ -949,7 +949,7 @@ "probability-exercises-ex-4": { "title": "Exercise 13.4", "breadcrumb": "13-Quantifying-Uncertainity", - "content" : "Would it be rational for an agent to hold the three beliefs$P(A) {0.4}$, $P(B) {0.3}$, and$P(A lor B) {0.5}$? If so, what range of probabilities wouldbe rational for the agent to hold for $A land B$? Make up a table likethe one in Figure de-finetti-table, and show how itsupports your argument about rationality. Then draw another version ofthe table where $P(A lor B){0.7}$. Explain why it is rational to have this probability,even though the table shows one case that is a loss and three that justbreak even. (Hint: what is Agent 1 committed to about theprobability of each of the four cases, especially the case that is aloss?)", + "content" : "Would it be rational for an agent to hold the three beliefs$P(A) = 0.4$, $P(B) = 0.3$, and$P(A lor B) = 0.5$? If so, what range of probabilities wouldbe rational for the agent to hold for $A land B$? Make up a table likethe one in Figure de-finetti-table, and show how itsupports your argument about rationality. Then draw another version ofthe table where $P(A lor B)= 0.7$. 
Explain why it is rational to have this probability,even though the table shows one case that is a loss and three that justbreak even. (Hint: what is Agent 1 committed to about theprobability of each of the four cases, especially the case that is aloss?)", "url": " /probability-exercises/ex_4/" } @@ -1030,7 +1030,7 @@ "probability-exercises-ex-13": { "title": "Exercise 13.13", "breadcrumb": "13-Quantifying-Uncertainity", - "content" : "We wish to transmit an $n$-bit message to a receiving agent. The bits inthe message are independently corrupted (flipped) during transmissionwith $epsilon$ probability each. With an extra parity bit sent alongwith the original information, a message can be corrected by thereceiver if at most one bit in the entire message (including the paritybit) has been corrupted. Suppose we want to ensure that the correctmessage is received with probability at least $1-delta$. What is themaximum feasible value of $n$? Calculate this value for the case$epsilon0.001$, $delta0.01$.", + "content" : "We wish to transmit an $n$-bit message to a receiving agent. The bits inthe message are independently corrupted (flipped) during transmissionwith $epsilon$ probability each. With an extra parity bit sent alongwith the original information, a message can be corrected by thereceiver if at most one bit in the entire message (including the paritybit) has been corrupted. Suppose we want to ensure that the correctmessage is received with probability at least $1-delta$. What is themaximum feasible value of $n$? 
Calculate this value for the case$epsilon = 0.001$, $delta = 0.01$.", "url": " /probability-exercises/ex_13/" } @@ -1529,7 +1529,7 @@ "complex-decisions-exercises-ex-22": { "title": "Exercise 17.22", "breadcrumb": "17-Making-Complex-Decision", - "content" : "The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game betweenpoliticians and the Federal Reserve.| | Fed: contract | Fed: do nothing | Fed: expand || --- | --- | --- | --- || **Pol: contract** | $F=7, P=1$ | $F=9,P=4$ | $F=6,P=6$ || **Pol: do nothing** | $F=8, P=2$ | $F=5,P=5$ | $F=4,P=9$ || **Pol: expand** | $F=3, P=3$ | $F=2,P=7$ | $F=1,P=8$ |Politicians can expand or contract fiscal policy, while the Fed canexpand or contract monetary policy. (And of course either side canchoose to do nothing.) Each side also has preferences for who should dowhat—neither side wants to look like the bad guys. The payoffs shown aresimply the rank orderings: 9 for first choice through 1 for last choice.Find the Nash equilibrium of the game in pure strategies. Is this aPareto-optimal solution? You might wish to analyze the policies ofrecent administrations in this light.", + "content" : "The following payoff matrix, from @Blinder:1983 by way of Bernstein:1996, shows a game betweenpoliticians and the Federal Reserve.$$begin{array} {|r|r|}hline &amp; Fed: contract &amp; Fed: do nothing &amp; Fed: expand hline Pol: contract &amp; F=7, P=1 &amp; F=9, P=4 &amp; F=6, P=6 Pol: do nothing &amp; F=8, P=2 &amp; F=5, P=5 &amp; F=4, P=9 Pol: expand &amp; F=3, P=3 &amp; F=2, P=7 &amp; F=1, P=8 hline end{array}$$Politicians can expand or contract fiscal policy, while the Fed canexpand or contract monetary policy. (And of course either side canchoose to do nothing.) Each side also has preferences for who should dowhat—neither side wants to look like the bad guys. The payoffs shown aresimply the rank orderings: 9 for first choice through 1 for last choice.Find the Nash equilibrium of the game in pure strategies. 
Is this aPareto-optimal solution? You might wish to analyze the policies ofrecent administrations in this light.", "url": " /complex-decisions-exercises/ex_22/" } @@ -1720,7 +1720,7 @@ "planning-exercises-ex-14": { "title": "Exercise 10.14", "breadcrumb": "10-Classical-Planning", - "content" : "Examine the definition of **bidirectionalsearch** in Chapter search-chapter.1. Would bidirectional state-space search be a good idea for planning?2. What about bidirectional search in the space of partial-order plans?3. Devise a version of partial-order planning in which an action can be added to a plan if its preconditions can be achieved by the effects of actions already in the plan. Explain how to deal with conflicts and ordering constraints. Is the algorithm essentially identical to forward state-space search?", + "content" : "Examine the definition of bidirectional search in Chapter search-chapter.1. Would bidirectional state-space search be a good idea for planning?2. What about bidirectional search in the space of partial-order plans?3. Devise a version of partial-order planning in which an action can be added to a plan if its preconditions can be achieved by the effects of actions already in the plan. Explain how to deal with conflicts and ordering constraints. Is the algorithm essentially identical to forward state-space search?", "url": " /planning-exercises/ex_14/" } @@ -1857,7 +1857,7 @@ "robotics-exercises-ex-1": { "title": "Exercise 25.1", "breadcrumb": "25-Robotics", - "content" : "Monte Carlo localization isbiased for any finite sample size—i.e., the expectedvalue of the location computed by the algorithm differs from the trueexpected value—because of the way particle filtering works. In thisquestion, you are asked to quantify this bias.To simplify, consider a world with four possible robot locations:$X={x_,x_,x_,x_}$. Initially, wedraw $Ngeq $ samples uniformly from among those locations. 
Asusual, it is perfectly acceptable if more than one sample is generatedfor any of the locations $X$. Let $Z$ be a Boolean sensor variablecharacterized by the following conditional probabilities:$$begin{aligned}P(zmid x_) &amp;=&amp; } qquadqquad P(lnot zmid x_);;=;;} P(zmid x_) &amp;=&amp; } qquadqquad P(lnot zmid x_);;=;;} P(zmid x_) &amp;=&amp; } qquadqquad P(lnot zmid x_);;=;;} P(zmid x_) &amp;=&amp; } qquadqquad P(lnot zmid x_);;=;;} .end{aligned}$$begin{table}[]begin{tabular}{ll}P(ztextbackslash{}mid x_{{textbackslash{}rm 1}}) &amp;=&amp; {{textbackslash{}rm {0.8}}} &amp; 1 1 &amp; 1 1 &amp; 1 1 &amp; 1end{tabular}end{table}MCL uses these probabilities to generate particle weights, which aresubsequently normalized and used in the resampling process. Forsimplicity, let us assume we generate only one new sample in theresampling process, regardless of $N$. This sample might correspond toany of the four locations in $X$. Thus, the sampling process defines aprobability distribution over $X$.1. What is the resulting probability distribution over $X$ for this new sample? Answer this question separately for $N=,ldots,}$, and for $N=infty$.2. The difference between two probability distributions $P$ and $Q$ can be measured by the KL divergence, which is defined as $${KL}(P,Q) = sum_i P(x_i)logfrac{P(x_i)}{Q(x_i)} .$$ What are the KL divergences between the distributions in (a) and the true posterior?3. What modification of the problem formulation (not the algorithm!) would guarantee that the specific estimator above is unbiased even for finite values of $N$? Provide at least two such modifications (each of which should be sufficient).", + "content" : "Monte Carlo localization isbiased for any finite sample size—i.e., the expectedvalue of the location computed by the algorithm differs from the trueexpected value—because of the way particle filtering works. 
In thisquestion, you are asked to quantify this bias.To simplify, consider a world with four possible robot locations:$X={x_1,x_2,x_3,x_4}$. Initially, wedraw $Ngeq 1$ samples uniformly from among those locations. Asusual, it is perfectly acceptable if more than one sample is generatedfor any of the locations $X$. Let $Z$ be a Boolean sensor variablecharacterized by the following conditional probabilities:$$begin{aligned}P(z | x_1) = 0.8 qquadqquad P(lnot z | x_1) = 0.2 P(z | x_2) = 0.4 qquadqquad P(lnot z | x_2) = 0.6 P(z | x_3) = 0.1 qquadqquad P(lnot z | x_3) = 0.9 P(z | x_4) = 0.1 qquadqquad P(lnot z | x_4) = 0.9 end{aligned}$$MCL uses these probabilities to generate particle weights, which aresubsequently normalized and used in the resampling process. Forsimplicity, let us assume we generate only one new sample in theresampling process, regardless of $N$. This sample might correspond toany of the four locations in $X$. Thus, the sampling process defines aprobability distribution over $X$.1. What is the resulting probability distribution over $X$ for this new sample? Answer this question separately for $N=1,ldots,10$, and for $N=infty$.2. The difference between two probability distributions $P$ and $Q$ can be measured by the KL divergence, which is defined as $${KL}(P,Q) = sum_i P(x_i)logfrac{P(x_i)}{Q(x_i)} .$$ What are the KL divergences between the distributions in (a) and the true posterior?3. What modification of the problem formulation (not the algorithm!) would guarantee that the specific estimator above is unbiased even for finite values of $N$? Provide at least two such modifications (each of which should be sufficient).", "url": " /robotics-exercises/ex_1/" } @@ -2482,7 +2482,7 @@ "bayesian-learning-exercises-ex-8": { "title": "Exercise 20.8", "breadcrumb": "20-Learning-Probabilistic-Models", - "content" : "This exercise investigates properties ofthe Beta distribution defined inEquation (beta-equation.1. 
By integrating over the range $[0,1]$, show that the normalization constant for the distribution $[a,b]$ is given by $alpha = Gamma(a+b)/Gamma(a)Gamma(b)$ where $Gamma(x)$ is the Gamma function, defined by $Gamma(x+1)xcdotGamma(x)$ and $Gamma(1)1$. (For integer $x$, $Gamma(x+1)x!$.)2. Show that the mean is $a/(a+b)$.3. Find the mode(s) (the most likely value(s) of $theta$).4. Describe the distribution $[epsilon,epsilon]$ for very small $epsilon$. What happens as such a distribution is updated?", + "content" : "This exercise investigates properties ofthe Beta distribution defined inEquation (beta-equation).1. By integrating over the range $[0,1]$, show that the normalization constant for the distribution $[a,b]$ is given by $alpha = Gamma(a+b)/Gamma(a)Gamma(b)$ where $Gamma(x)$ is the Gamma function, defined by $Gamma(x+1) = xcdotGamma(x)$ and $Gamma(1) = 1$. (For integer $x$, $Gamma(x+1) = x!$.)2. Show that the mean is $a/(a+b)$.3. Find the mode(s) (the most likely value(s) of $theta$).4. Describe the distribution $[epsilon,epsilon]$ for very small $epsilon$. What happens as such a distribution is updated?", "url": " /bayesian-learning-exercises/ex_8/" } @@ -3289,7 +3289,7 @@ "logical-inference-exercises-ex-14": { "title": "Exercise 9.14", "breadcrumb": "9-Inference-In-First-Order-Logic", - "content" : "Suppose we put into a logical knowledge base a segment of theU.S. census data listing the age, city of residence, date of birth, andmother of every person, using social security numbers as identifyingconstants for each person. Thus, George’s age is given by${Age}(mbox443-{65}-{1282}}, {56})$. 
Which of the followingindexing schemes S1–S5 enable an efficient solution for which of thequeries Q1–Q4 (assuming normal backward chaining)?- S1: an index for each atom in each position.- S2: an index for each first argument.- S3: an index for each predicate atom.- S4: an index for each combination of predicate and first argument.- S5: an index for each combination of predicate and second argument and an index for each first argument.- Q1: ${Age}(mbox 443-44-4321,x)$- Q2: ${ResidesIn}(x,{Houston})$- Q3: ${Mother}(x,y)$- Q4: ${Age}(x,{34}) land {ResidesIn}(x,{TinyTownUSA})$", + "content" : "Suppose we put into a logical knowledge base a segment of theU.S. census data listing the age, city of residence, date of birth, andmother of every person, using social security numbers as identifyingconstants for each person. Thus, George’s age is given by${Age}(443-65-1282, 56)$. Which of the followingindexing schemes S1–S5 enable an efficient solution for which of thequeries Q1–Q4 (assuming normal backward chaining)?- S1: an index for each atom in each position.- S2: an index for each first argument.- S3: an index for each predicate atom.- S4: an index for each combination of predicate and first argument.- S5: an index for each combination of predicate and second argument and an index for each first argument.- Q1: ${Age}(443-44-4321, x)$- Q2: ${ResidesIn}(x,{Houston})$- Q3: ${Mother}(x,y)$- Q4: ${Age}(x,{34}) land {ResidesIn}(x,{TinyTownUSA})$", "url": " /logical-inference-exercises/ex_14/" } @@ -3642,7 +3642,7 @@ "csp-exercises-ex-14": { "title": "Exercise 6.14", "breadcrumb": "6-Constraint-Satisfaction-Problems", - "content" : "AC-3 puts back on the queue *every* arc($X_{k}, X_{i}$) whenever *any* value is deleted from thedomain of $X_{i}$, even if each value of $X_{k}$ is consistent withseveral remaining values of $X_{i}$. 
Suppose that, for every arc($X_{k}, X_{i}$), we keep track of the number of remaining values of$X_{i}$ that are consistent with each value of $X_{k}$. Explain how toupdate these numbers efficiently and hence show that arc consistency canbe enforced in total time $O(n^2d^2)$.", + "content" : "AC-3 puts back on the queue every arc($X_{k}, X_{i}$) whenever any value is deleted from thedomain of $X_{i}$, even if each value of $X_{k}$ is consistent withseveral remaining values of $X_{i}$. Suppose that, for every arc($X_{k}, X_{i}$), we keep track of the number of remaining values of$X_{i}$ that are consistent with each value of $X_{k}$. Explain how toupdate these numbers efficiently and hence show that arc consistency canbe enforced in total time $O(n^2d^2)$.", "url": " /csp-exercises/ex_14/" } @@ -3707,7 +3707,7 @@ "bayes-nets-exercises-ex-16": { "title": "Exercise 14.16", "breadcrumb": "14-Probabilistic-Reasoning", - "content" : "Consider the Bayes net shown in Figure politics-figure.1. Which of the following are asserted by the network structure? 1. ${textbf{P}}(B,I,M) = {textbf{P}}(B){textbf{P}}(I){textbf{P}}(M)$. 2. ${textbf{P}}(JG) = {textbf{P}}(JG,I)$. 3. ${textbf{P}}(MG,B,I) = {textbf{P}}(MG,B,I,J)$.2. Calculate the value of $P(b,i,lnot m,g,j)$.3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.4. A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?5. Suppose we want to add the variable $P{PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add. 
A simple Bayes net with Boolean variables B = {BrokeElectionLaw}, I = {Indicted}, M = {PoliticallyMotivatedProsecutor}, G= {FoundGuilty}, J = {Jailed}.", + "content" : "Consider the Bayes net shown in Figure politics-figure.1. Which of the following are asserted by the network structure? 1. ${textbf{P}}(B,I,M) = {textbf{P}}(B){textbf{P}}(I){textbf{P}}(M)$. 2. ${textbf{P}}(J|G) = {textbf{P}}(J|G,I)$. 3. ${textbf{P}}(M|G,B,I) = {textbf{P}}(M|G,B,I,J)$.2. Calculate the value of $P(b,i,lnot m,g,j)$.3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.4. A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?5. Suppose we want to add the variable $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add. A simple Bayes net with Boolean variables B = {BrokeElectionLaw}, I = {Indicted}, M = {PoliticallyMotivatedProsecutor}, G= {FoundGuilty}, J = {Jailed}.", "url": " /bayes-nets-exercises/ex_16/" } @@ -3752,7 +3752,7 @@ "bayes-nets-exercises-ex-17": { "title": "Exercise 14.17", "breadcrumb": "14-Probabilistic-Reasoning", - "content" : "Consider the Bayes net shown in Figure politics-figure.1. Which of the following are asserted by the network structure? 1. ${textbf{P}}(B,I,M) = {textbf{P}}(B){textbf{P}}(I){textbf{P}}(M)$. 2. ${textbf{P}}(JG) = {textbf{P}}(JG,I)$. 3. ${textbf{P}}(MG,B,I) = {textbf{P}}(MG,B,I,J)$.2. Calculate the value of $P(b,i,lnot m,g,j)$.3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.4. 
A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?5. Suppose we want to add the variable $P{PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.", + "content" : "Consider the Bayes net shown in Figure politics-figure.1. Which of the following are asserted by the network structure? 1. ${textbf{P}}(B,I,M) = {textbf{P}}(B){textbf{P}}(I){textbf{P}}(M)$. 2. ${textbf{P}}(J|G) = {textbf{P}}(J|G,I)$. 3. ${textbf{P}}(M|G,B,I) = {textbf{P}}(M|G,B,I,J)$.2. Calculate the value of $P(b,i,lnot m,g,j)$.3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.4. A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?5. Suppose we want to add the variable $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.", "url": " /bayes-nets-exercises/ex_17/" } @@ -3779,7 +3779,7 @@ "bayes-nets-exercises-ex-4": { "title": "Exercise 14.4", "breadcrumb": "14-Probabilistic-Reasoning", - "content" : "The arc reversal operation of in a Bayesian network allows us to change the directionof an arc $Xrightarrow Y$ while preserving the joint probabilitydistribution that the network represents Shachter:1986. Arc reversalmay require introducing new arcs: all the parents of $X$ also becomeparents of $Y$, and all parents of $Y$ also become parents of $X$.1. 
Assume that $X$ and $Y$ start with $m$ and $n$ parents, respectively, and that all variables have $k$ values. By calculating the change in size for the CPTs of $X$ and $Y$, show that the total number of parameters in the network cannot decrease during arc reversal. (Hint: the parents of $X$ and $Y$ need not be disjoint.)2. Under what circumstances can the total number remain constant?3. Let the parents of $X$ be $textbf{U} cup textbf{V}$ and the parents of $Y$ be $textbf{V} cup textbf{W}$, where $textbf{U}$ and $textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$begin{aligned} {textbf{P}}(Ytextbf{U},textbf{V},textbf{W}) &amp;=&amp; sum_x {textbf{P}}(Ytextbf{V},textbf{W}, x) {textbf{P}}(xtextbf{U}, textbf{V}) {textbf{P}}(Xtextbf{U},textbf{V},textbf{W}, Y) &amp;=&amp; {textbf{P}}(YX, textbf{V}, textbf{W}) {textbf{P}}(Xtextbf{U}, textbf{V}) / {textbf{P}}(Ytextbf{U},textbf{V},textbf{W}) .end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.", + "content" : "The arc reversal operation of in a Bayesian network allows us to change the directionof an arc $Xrightarrow Y$ while preserving the joint probabilitydistribution that the network represents Shachter:1986. Arc reversalmay require introducing new arcs: all the parents of $X$ also becomeparents of $Y$, and all parents of $Y$ also become parents of $X$.1. Assume that $X$ and $Y$ start with $m$ and $n$ parents, respectively, and that all variables have $k$ values. By calculating the change in size for the CPTs of $X$ and $Y$, show that the total number of parameters in the network cannot decrease during arc reversal. (Hint: the parents of $X$ and $Y$ need not be disjoint.)2. Under what circumstances can the total number remain constant?3. Let the parents of $X$ be $textbf{U} cup textbf{V}$ and the parents of $Y$ be $textbf{V} cup textbf{W}$, where $textbf{U}$ and $textbf{W}$ are disjoint. 
The formulas for the new CPTs after arc reversal are as follows: $$begin{aligned} {textbf{P}}(Y | textbf{U},textbf{V},textbf{W}) &amp;=&amp; sum_x {textbf{P}}(Y | textbf{V},textbf{W}, x) {textbf{P}}(x | textbf{U}, textbf{V}) {textbf{P}}(X | textbf{U},textbf{V},textbf{W}, Y) &amp;=&amp; {textbf{P}}(Y | X, textbf{V}, textbf{W}) {textbf{P}}(X | textbf{U}, textbf{V}) / {textbf{P}}(Y | textbf{U},textbf{V},textbf{W}) .end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.", "url": " /bayes-nets-exercises/ex_4/" } @@ -3970,7 +3970,7 @@ "advanced-search-exercises-ex-5": { "title": "Exercise 4.5", "breadcrumb": "4-Beyond-Classical-Search", - "content" : "The **And-Or-Graph-Search** algorithm inFigure and-or-graph-search-algorithm checks forrepeated states only on the path from the root to the current state.Suppose that, in addition, the algorithm were to store*every* visited state and check against that list. (See inFigure breadth-first-search-algorithm for an example.)Determine the information that should be stored and how the algorithmshould use that information when a repeated state is found.(*Hint*: You will need to distinguish at least betweenstates for which a successful subplan was constructed previously andstates for which no subplan could be found.) Explain how to use labels,as defined in Section cyclic-plan-section, to avoidhaving multiple copies of subplans.", + "content" : "The And-Or-Graph-Search algorithm inFigure and-or-graph-search-algorithm checks forrepeated states only on the path from the root to the current state.Suppose that, in addition, the algorithm were to storeevery visited state and check against that list. 
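The arc-reversal CPT formulas in Exercise 14.4 above can be sanity-checked numerically. The following is an editorial sketch, not part of the book's text: five binary variables with roots $U, V, W$, child $X$ with parents $\{U,V\}$, and child $Y$ with parents $\{V,W,X\}$ (so $\textbf{U}=\{U\}$, $\textbf{V}=\{V\}$, $\textbf{W}=\{W\}$, disjoint as in part 3), with randomly generated CPTs.

```python
import itertools
import random

random.seed(0)

def cpt(n_parents):
    """Random binary CPT: maps a parent-value tuple to P(child = 1)."""
    return {ps: random.random() for ps in itertools.product([0, 1], repeat=n_parents)}

def p(table, parents, val):
    q = table[parents]
    return q if val == 1 else 1.0 - q

def root(prior, val):
    return prior if val == 1 else 1.0 - prior

# Original network: roots U, V, W; X <- {U, V}; Y <- {V, W, X}.
pu, pv, pw = random.random(), random.random(), random.random()
px = cpt(2)   # P(X | U, V)
py = cpt(3)   # P(Y | V, W, X)

def joint_original(u, v, w, x, y):
    return (root(pu, u) * root(pv, v) * root(pw, w)
            * p(px, (u, v), x) * p(py, (v, w, x), y))

# Reversed-arc CPTs, taken directly from the formulas in the exercise.
def p_y_rev(u, v, w, y):            # P(Y | U,V,W) = sum_x P(Y|V,W,x) P(x|U,V)
    return sum(p(py, (v, w, x), y) * p(px, (u, v), x) for x in (0, 1))

def p_x_rev(u, v, w, y, x):         # P(X | U,V,W,Y) by the quotient formula
    return p(py, (v, w, x), y) * p(px, (u, v), x) / p_y_rev(u, v, w, y)

def joint_reversed(u, v, w, x, y):
    return (root(pu, u) * root(pv, v) * root(pw, w)
            * p_y_rev(u, v, w, y) * p_x_rev(u, v, w, y, x))

for a in itertools.product([0, 1], repeat=5):
    assert abs(joint_original(*a) - joint_reversed(*a)) < 1e-12
print("joint distribution preserved under arc reversal")
```

The check passes because the quotient in the second formula cancels the summed factor introduced by the first, so the product of the two new CPTs reduces to $P(y|v,w,x)\,P(x|u,v)$ — the same cancellation the requested proof formalizes.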
(See inFigure breadth-first-search-algorithm for an example.)Determine the information that should be stored and how the algorithmshould use that information when a repeated state is found.(*Hint*: You will need to distinguish at least betweenstates for which a successful subplan was constructed previously andstates for which no subplan could be found.) Explain how to use labels,as defined in Section cyclic-plan-section, to avoidhaving multiple copies of subplans.", "url": " /advanced-search-exercises/ex_5/" } @@ -3997,7 +3997,7 @@ "advanced-search-exercises-ex-12": { "title": "Exercise 4.12", "breadcrumb": "4-Beyond-Classical-Search", - "content" : "We can turn the navigation problem inExercise path-planning-exercise into an environment asfollows:- The percept will be a list of the positions, *relative to the agent*, of the visible vertices. The percept does *not* include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”- Each action will be a vector describing a straight-line path to follow. If the path is unobstructed, the action succeeds; otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment teleports the agent to a *random location* (not inside an obstacle).- The performance measure charges the agent 1 point for each unit of distance traversed and awards 1000 points each time the goal is reached.1. Implement this environment and a problem-solving agent for it. After each teleportation, the agent will need to formulate a new problem, which will involve discovering its current location.2. Document your agent’s performance (by having the agent generate suitable commentary as it moves around) and report its performance over 100 episodes.3. 
Modify the environment so that 30% of the time the agent ends up at an unintended destination (chosen randomly from the other visible vertices if any; otherwise, no move at all). This is a crude model of the motion errors of a real robot. Modify the agent so that when such an error is detected, it finds out where it is and then constructs a plan to get back to where it was and resume the old plan. Remember that sometimes getting back to where it was might also fail! Show an example of the agent successfully overcoming two successive motion errors and still reaching the goal.4. Now try two different recovery schemes after an error: (1) head for the closest vertex on the original route; and (2) replan a route to the goal from the new location. Compare the performance of the three recovery schemes. Would the inclusion of search costs affect the comparison?5. Now suppose that there are locations from which the view is identical. (For example, suppose the world is a grid with square obstacles.) What kind of problem does the agent now face? What do solutions look like?", + "content" : "We can turn the navigation problem inExercise path-planning-exercise into an environment asfollows:- The percept will be a list of the positions, relative to the agent, of the visible vertices. The percept does not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”- Each action will be a vector describing a straight-line path to follow. If the path is unobstructed, the action succeeds; otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment teleports the agent to a random location (not inside an obstacle).- The performance measure charges the agent 1 point for each unit of distance traversed and awards 1000 points each time the goal is reached.1. 
Implement this environment and a problem-solving agent for it. After each teleportation, the agent will need to formulate a new problem, which will involve discovering its current location.2. Document your agent’s performance (by having the agent generate suitable commentary as it moves around) and report its performance over 100 episodes.3. Modify the environment so that 30% of the time the agent ends up at an unintended destination (chosen randomly from the other visible vertices if any; otherwise, no move at all). This is a crude model of the motion errors of a real robot. Modify the agent so that when such an error is detected, it finds out where it is and then constructs a plan to get back to where it was and resume the old plan. Remember that sometimes getting back to where it was might also fail! Show an example of the agent successfully overcoming two successive motion errors and still reaching the goal.4. Now try two different recovery schemes after an error: (1) head for the closest vertex on the original route; and (2) replan a route to the goal from the new location. Compare the performance of the three recovery schemes. Would the inclusion of search costs affect the comparison?5. Now suppose that there are locations from which the view is identical. (For example, suppose the world is a grid with square obstacles.) What kind of problem does the agent now face? What do solutions look like?", "url": " /advanced-search-exercises/ex_12/" } @@ -4006,7 +4006,7 @@ "advanced-search-exercises-ex-13": { "title": "Exercise 4.13", "breadcrumb": "4-Beyond-Classical-Search", - "content" : "Suppose that an agent is in a $3 times 3$maze environment like the one shown inFigure maze-3x3-figure. The agent knows that itsinitial location is (1,1), that the goal is at (3,3), and that theactions *Up*, *Down*, *Left*, *Right* have their usualeffects unless blocked by a wall. The agent does *not* knowwhere the internal walls are. 
In any given state, the agent perceivesthe set of legal actions; it can also tell whether the state is one ithas visited before.1. Explain how this online search problem can be viewed as an offline search in belief-state space, where the initial belief state includes all possible environment configurations. How large is the initial belief state? How large is the space of belief states?2. How many distinct percepts are possible in the initial state?3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?Notice that this contingency plan is a solution for *everypossible environment* fitting the given description. Therefore,interleaving of search and execution is not strictly necessary even inunknown environments.", + "content" : "Suppose that an agent is in a $3 times 3$maze environment like the one shown inFigure maze-3x3-figure. The agent knows that itsinitial location is (1,1), that the goal is at (3,3), and that theactions Up, Down, Left, Right have their usualeffects unless blocked by a wall. The agent does not knowwhere the internal walls are. In any given state, the agent perceivesthe set of legal actions; it can also tell whether the state is one ithas visited before.1. Explain how this online search problem can be viewed as an offline search in belief-state space, where the initial belief state includes all possible environment configurations. How large is the initial belief state? How large is the space of belief states?2. How many distinct percepts are possible in the initial state?3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?Notice that this contingency plan is a solution for everypossible environment fitting the given description. 
Therefore,interleaving of search and execution is not strictly necessary even inunknown environments.", "url": " /advanced-search-exercises/ex_13/" } @@ -4024,7 +4024,7 @@ "advanced-search-exercises-ex-9": { "title": "Exercise 4.9", "breadcrumb": "4-Beyond-Classical-Search", - "content" : "On page multivalued-sensorless-page it was assumedthat a given action would have the same cost when executed in anyphysical state within a given belief state. (This leads to abelief-state search problem with well-defined step costs.) Now considerwhat happens when the assumption does not hold. Does the notion ofoptimality still make sense in this context, or does it requiremodification? Consider also various possible definitions of the “cost”of executing an action in a belief state; for example, we could use the*minimum* of the physical costs; or the*maximum*; or a cost *interval* with the lowerbound being the minimum cost and the upper bound being the maximum; orjust keep the set of all possible costs for that action. For each ofthese, explore whether A* (with modifications if necessary) can returnoptimal solutions.", + "content" : "On page multivalued-sensorless-page it was assumedthat a given action would have the same cost when executed in anyphysical state within a given belief state. (This leads to abelief-state search problem with well-defined step costs.) Now considerwhat happens when the assumption does not hold. Does the notion ofoptimality still make sense in this context, or does it requiremodification? Consider also various possible definitions of the “cost”of executing an action in a belief state; for example, we could use theminimum of the physical costs; or themaximum; or a cost interval with the lowerbound being the minimum cost and the upper bound being the maximum; orjust keep the set of all possible costs for that action. 
For each ofthese, explore whether A* (with modifications if necessary) can returnoptimal solutions.", "url": " /advanced-search-exercises/ex_9/" } @@ -4051,7 +4051,7 @@ "advanced-search-exercises-ex-6": { "title": "Exercise 4.6", "breadcrumb": "4-Beyond-Classical-Search", - "content" : "Explain precisely how to modify the **And-Or-Graph-Search** algorithm togenerate a cyclic plan if no acyclic plan exists. You will need to dealwith three issues: labeling the plan steps so that a cyclic plan canpoint back to an earlier part of the plan, modifying **Or-Search** so that itcontinues to look for acyclic plans after finding a cyclic plan, andaugmenting the plan representation to indicate whether a plan is cyclic.Show how your algorithm works on (a) the slippery vacuum world, and (b)the slippery, erratic vacuum world. You might wish to use a computerimplementation to check your results.", + "content" : "Explain precisely how to modify the And-Or-Graph-Search algorithm togenerate a cyclic plan if no acyclic plan exists. You will need to dealwith three issues: labeling the plan steps so that a cyclic plan canpoint back to an earlier part of the plan, modifying Or-Search so that itcontinues to look for acyclic plans after finding a cyclic plan, andaugmenting the plan representation to indicate whether a plan is cyclic.Show how your algorithm works on (a) the slippery vacuum world, and (b)the slippery, erratic vacuum world. You might wish to use a computerimplementation to check your results.", "url": " /advanced-search-exercises/ex_6/" } @@ -4080,7 +4080,7 @@ "decision-theory-exercises-ex-16": { "title": "Exercise 16.16", "breadcrumb": "16-Making-Simple-Decisions", - "content" : "Alex is given the choice between two games. In Game 1, a fair coin isflipped and if it comes up heads, Alex receives $$$100$$. If the coin comesup tails, Alex receives nothing. In Game 2, a fair coin is flippedtwice. 
Each time the coin comes up heads, Alex receives $$$50$$, and Alexreceives nothing for each coin flip that comes up tails. Assuming thatAlex has a monotonically increasing utility function for money in therange [$0, $100], show mathematically that if Alex prefers Game 2 toGame 1, then Alex is risk averse (at least with respect to this range ofmonetary amounts).Show that if $X_1$ and $X_2$ are preferentially independent of $X_3$,and $X_2$ and $X_3$ are preferentially independent of $X_1$, then $X_3$and $X_1$ are preferentially independent of $X_2$.", + "content" : "Alex is given the choice between two games. In Game 1, a fair coin isflipped and if it comes up heads, Alex receives $$100$. If the coin comesup tails, Alex receives nothing. In Game 2, a fair coin is flippedtwice. Each time the coin comes up heads, Alex receives $$50$, and Alexreceives nothing for each coin flip that comes up tails. Assuming thatAlex has a monotonically increasing utility function for money in therange [$0, $100], show mathematically that if Alex prefers Game 2 toGame 1, then Alex is risk averse (at least with respect to this range ofmonetary amounts).Show that if $X_1$ and $X_2$ are preferentially independent of $X_3$,and $X_2$ and $X_3$ are preferentially independent of $X_1$, then $X_3$and $X_1$ are preferentially independent of $X_2$.", "url": " /decision-theory-exercises/ex_16/" } @@ -4179,7 +4179,7 @@ "decision-theory-exercises-ex-15": { "title": "Exercise 16.15", "breadcrumb": "16-Making-Simple-Decisions", - "content" : "Economists often make use of an exponential utility function for money:$U(x) = -e^{-x/R}$, where $R$ is a positive constant representing anindividual’s risk tolerance. Risk tolerance reflects how likely anindividual is to accept a lottery with a particular expected monetaryvalue (EMV) versus some certain payoff. As $R$ (which is measured in thesame units as $x$) becomes larger, the individual becomes lessrisk-averse.1. 
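For Exercise 16.16 above, the requested argument can be laid out in a few lines (editorial sketch; $U$ is any monotonically increasing utility function on $[\$0, \$100]$):

```latex
EU(G_1) = \tfrac{1}{2}\,U(\$100) + \tfrac{1}{2}\,U(\$0),
\qquad
EU(G_2) = \tfrac{1}{4}\,U(\$100) + \tfrac{1}{2}\,U(\$50) + \tfrac{1}{4}\,U(\$0).
```

Preferring Game 2 means $EU(G_2) > EU(G_1)$, which cancels to $U(\$50) > \tfrac{1}{2}U(\$0) + \tfrac{1}{2}U(\$100)$: a certain \$50 is worth more to Alex than a 50/50 gamble over \$0 and \$100 with the same expected monetary value, which is exactly risk aversion on this range.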
Assume Mary has an exponential utility function with $R = $400$. Mary is given the choice between receiving $$$400$$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $5000 and a 40% probability of winning nothing. Assuming Marry acts rationally, which option would she choose? Show how you derived your answer.2. Consider the choice between receiving $$$100$$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)", + "content" : "Economists often make use of an exponential utility function for money: $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.1. Assume Mary has an exponential utility function with $R = $400$. Mary is given the choice between receiving $$400$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.2. Consider the choice between receiving $$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $500 and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. 
(You might find it helpful to write a short program to help you solve this problem.)", "url": " /decision-theory-exercises/ex_15/" } @@ -4206,7 +4206,7 @@ "decision-theory-exercises-ex-22": { "title": "Exercise 16.22", "breadcrumb": "16-Making-Simple-Decisions", - "content" : "(Adapted from Pearl [Pearl:1988].) A used-carbuyer can decide to carry out various tests with various costs (e.g.,kick the tires, take the car to a qualified mechanic) and then,depending on the outcome of the tests, decide which car to buy. We willassume that the buyer is deciding whether to buy car $c_1$, that thereis time to carry out at most one test, and that $t_1$ is the test of$c_1$ and costs $50.A car can be in good shape (quality $$q^+$$) or bad shape (quality $q^-$),and the tests might help indicate what shape the car is in. Car $c_1$costs $1,500, and its market value is $$$2,000$$ if it is in good shape; ifnot, $$$700$$ in repairs will be needed to make it in good shape. The buyer’sestimate is that $c_1$ has a 70% chance of being in good shape.1. Draw the decision network that represents this problem.2. Calculate the expected net gain from buying $c_1$, given no test.3. Tests can be described by the probability that the car will pass or fail the test given that the car is in good or bad shape. We have the following information: $$P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$$ $$P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$$ Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.4. Calculate the optimal decisions given either a pass or a fail, and their expected utilities.5. Calculate the value of information of the test, and derive an optimal conditional plan for the buyer.", + "content" : "(Adapted from Pearl [Pearl:1988].) 
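The exponential-utility exercises above (16.15 and 16.14) suggest writing a short program; here is an editorial sketch for the $R = \$400$ variant. Part 1 compares the two expected utilities directly; part 2 finds the indifference value of $R$ by bisection (the bracket endpoints are assumptions: a tiny $R$ prefers the certain \$100, a huge $R$ prefers the lottery).

```python
import math

def U(x, R):
    """Exponential utility U(x) = -exp(-x/R)."""
    return -math.exp(-x / R)

# Part 1: certain $400 vs. a lottery with 60% chance of $5000, 40% of $0.
R = 400
eu_certain = U(400, R)
eu_lottery = 0.6 * U(5000, R) + 0.4 * U(0, R)
choice = "certain $400" if eu_certain > eu_lottery else "lottery"

# Part 2: find R making U(100) equal to 0.5*U(500) + 0.5*U(0), by bisection.
def gap(R):
    return U(100, R) - (0.5 * U(500, R) + 0.5 * U(0, R))

lo, hi = 1.0, 1e6     # gap(lo) > 0 (very risk-averse), gap(hi) < 0 (near risk-neutral)
for _ in range(200):
    mid = (lo + hi) / 2
    if gap(mid) > 0:
        lo = mid
    else:
        hi = mid
print(choice, round(lo, 1))
```

With these numbers the certain \$400 wins part 1, and the bisection converges to an $R$ in the low 150s (dollars).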
A used-car buyer can decide to carry out various tests with various costs (e.g., kick the tires, take the car to a qualified mechanic) and then, depending on the outcome of the tests, decide which car to buy. We will assume that the buyer is deciding whether to buy car $c_1$, that there is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs $50. A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ costs $1,500, and its market value is $$2,000$ if it is in good shape; if not, $$700$ in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.1. Draw the decision network that represents this problem.2. Calculate the expected net gain from buying $c_1$, given no test.3. Tests can be described by the probability that the car will pass or fail the test given that the car is in good or bad shape. We have the following information: $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$ $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$ Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.4. Calculate the optimal decisions given either a pass or a fail, and their expected utilities.5. Calculate the value of information of the test, and derive an optimal conditional plan for the buyer.", "url": " /decision-theory-exercises/ex_22/" } @@ -4224,7 +4224,7 @@ "decision-theory-exercises-ex-14": { "title": "Exercise 16.14", "breadcrumb": "16-Making-Simple-Decisions", - "content" : "Economists often make use of an exponential utility function for money:$U(x) = -e^{-x/R}$, where $R$ is a positive constant representing anindividual’s risk tolerance. Risk tolerance reflects how likely anindividual is to accept a lottery with a particular expected monetaryvalue (EMV) versus some certain payoff. 
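Parts 2–5 of the used-car exercise (16.22) reduce to a few lines of arithmetic; the sketch below (editorial, using only the numbers stated in the exercise) computes the no-test expected gain, the Bayes posteriors, and the value of information.

```python
# Numbers from Exercise 16.22.
p_good = 0.70
gain_good = 2000 - 1500          # market value minus price, car in good shape
gain_bad = 2000 - 700 - 1500     # $700 of repairs needed first
test_cost = 50
p_pass_good, p_pass_bad = 0.80, 0.35

def ev_buy(p):                   # expected net gain from buying when P(good) = p
    return p * gain_good + (1 - p) * gain_bad

ev_no_test = max(ev_buy(p_good), 0.0)        # best of buying vs. not buying

p_pass = p_good * p_pass_good + (1 - p_good) * p_pass_bad
p_good_pass = p_good * p_pass_good / p_pass                  # Bayes' theorem
p_good_fail = p_good * (1 - p_pass_good) / (1 - p_pass)

ev_with_info = (p_pass * max(ev_buy(p_good_pass), 0.0)
                + (1 - p_pass) * max(ev_buy(p_good_fail), 0.0))
voi = ev_with_info - ev_no_test
print(ev_no_test, round(voi, 2), "test" if voi > test_cost else "skip the test")
```

With these numbers buying remains the better act after either test outcome, so the test's information value does not cover its $50 cost.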
As $R$ (which is measured in thesame units as $x$) becomes larger, the individual becomes lessrisk-averse.1. Assume Mary has an exponential utility function with $$R = $500$$. Mary is given the choice between receiving $$$500$$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $5000 and a 40% probability of winning nothing. Assuming Marry acts rationally, which option would she choose? Show how you derived your answer.2. Consider the choice between receiving $$$100$$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $$$500$$ and a 50% probability of winning nothing. Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)", + "content" : "Economists often make use of an exponential utility function for money: $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.1. Assume Mary has an exponential utility function with $R = $500$. Mary is given the choice between receiving $$500$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.2. Consider the choice between receiving $$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $$500$ and a 50% probability of winning nothing. 
Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)", "url": " /decision-theory-exercises/ex_14/" } @@ -4583,7 +4583,7 @@ "nlp-communicating-exercises-ex-2": { "title": "Exercise 22.2", "breadcrumb": "22-Natural-Language-Processing", - "content" : "Write a program to do **segmentation** ofwords without spaces. Given a string, such as the URL“thelongestlistofthelongeststuffatthelongestdomainnameatlonglast.com,”return a list of component words: [“the,” “longest,” “list,”$ldots$]. This task is useful for parsing URLs, for spellingcorrection when words runtogether, and for languages such as Chinesethat do not have spaces between words. It can be solved with a unigramor bigram word model and a dynamic programming algorithm similar to theViterbi algorithm.", + "content" : "Write a program to do segmentation ofwords without spaces. Given a string, such as the URL“thelongestlistofthelongeststuffatthelongestdomainnameatlonglast.com,”return a list of component words: [“the,” “longest,” “list,”$ldots$]. This task is useful for parsing URLs, for spellingcorrection when words runtogether, and for languages such as Chinesethat do not have spaces between words. It can be solved with a unigramor bigram word model and a dynamic programming algorithm similar to theViterbi algorithm.", "url": " /nlp-communicating-exercises/ex_2/" } @@ -4927,7 +4927,7 @@ "fol-exercises-ex-1": { "title": "Exercise 8.1", "breadcrumb": "8-First-Order-Logic", - "content" : "A logical knowledge base represents the world using a set of sentenceswith no explicit structure. An analogicalrepresentation, on the other hand, has physical structure thatcorresponds directly to the structure of the thing represented. 
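The word-segmentation exercise above (22.2) admits the compact dynamic-programming solution it hints at. An editorial sketch follows; the tiny unigram table is a made-up assumption for demonstration — a real solution would estimate word probabilities from a corpus.

```python
import math

# Toy unigram model (hand-picked probabilities, for illustration only).
unigram = {"the": 0.05, "longest": 0.001, "list": 0.002, "of": 0.03,
           "stuff": 0.001, "at": 0.02, "domain": 0.0005, "name": 0.001,
           "long": 0.002, "last": 0.002, "a": 0.02, "est": 0.0001}

def cost(word):
    # Negative log probability; unseen words pay a large per-character penalty.
    return -math.log(unigram[word]) if word in unigram else 10.0 * len(word)

def segment(text):
    """Viterbi-style DP: best[i] = minimum cost of segmenting text[:i]."""
    best = [0.0] + [float("inf")] * len(text)
    back = [0] * (len(text) + 1)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - 12), i):     # cap candidate words at 12 chars
            c = best[j] + cost(text[j:i])
            if c < best[i]:
                best[i], back[i] = c, j
    words, i = [], len(text)
    while i > 0:                                # recover the best split
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]

print(segment("thelongestlistofthelongeststuff"))
```

Note how the model prefers "longest" over "long" + "est": one moderately rare word is cheaper than two, which is the behavior a corpus-trained unigram model would also tend to show.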
Considera road map of your country as an analogical representation of factsabout the country—it represents facts with a map language. Thetwo-dimensional structure of the map corresponds to the two-dimensionalsurface of the area.1. Give five examples of *symbols* in the map language.2. An *explicit* sentence is a sentence that the creator of the representation actually writes down. An *implicit* sentence is a sentence that results from explicit sentences because of properties of the analogical representation. Give three examples each of *implicit* and *explicit* sentences in the map language.3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.4. Give two examples of facts that are much easier to express in the map language than in first-order logic.5. Give two other examples of useful analogical representations. What are the advantages and disadvantages of each of these languages?", + "content" : "A logical knowledge base represents the world using a set of sentenceswith no explicit structure. An analogicalrepresentation, on the other hand, has physical structure thatcorresponds directly to the structure of the thing represented. Considera road map of your country as an analogical representation of factsabout the country—it represents facts with a map language. Thetwo-dimensional structure of the map corresponds to the two-dimensionalsurface of the area.1. Give five examples of symbols in the map language.2. An explicit sentence is a sentence that the creator of the representation actually writes down. An implicit sentence is a sentence that results from explicit sentences because of properties of the analogical representation. Give three examples each of implicit and explicit sentences in the map language.3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.4. 
Give two examples of facts that are much easier to express in the map language than in first-order logic.5. Give two other examples of useful analogical representations. What are the advantages and disadvantages of each of these languages?", "url": " /fol-exercises/ex_1/" } @@ -5084,7 +5084,7 @@ "dbn-exercises-ex-17": { "title": "Exercise 15.17", "breadcrumb": "15-Probabilistic-Reasoning-Over-Time", - "content" : "For the DBN specified in Exercise sleep1-exercise andfor the evidence values$$textbf{e}_1 = notspace redspace eyes,space notspace sleepingspace inspace class$$$$textbf{e}_2 = redspace eyes,space notspace sleepingspace inspace class$$$$textbf{e}_3 = redspace eyes,space sleepingspace inspace class$$perform the following computations:1. State estimation: Compute $$P({EnoughSleep}_t | textbf{e}_{1:t})$$ for each of $t = 1,2,3$.2. Smoothing: Compute $$P({EnoughSleep}_t | textbf{e}_{1:3})$$ for each of $t = 1,2,3$.3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.", + "content" : "For the DBN specified in Exercise sleep1-exercise andfor the evidence values$textbf{e}_1 = notspace redspace eyes,space notspace sleepingspace inspace class$$textbf{e}_2 = redspace eyes,space notspace sleepingspace inspace class$$textbf{e}_3 = redspace eyes,space sleepingspace inspace class$perform the following computations:1. State estimation: Compute $P({EnoughSleep}_t | textbf{e}_{1:t})$ for each of $t = 1,2,3$.2. Smoothing: Compute $P({EnoughSleep}_t | textbf{e}_{1:3})$ for each of $t = 1,2,3$.3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.", "url": " /dbn-exercises/ex_17/" } @@ -5093,7 +5093,7 @@ "question-bank": { "title": "Question Bank", "breadcrumb": "questionbank", - "content" : " Exercise 1 Define in your own words: (a) intelligence, (b) artificial intelligence,(c) agent, (d) rationality, (e) logical reasoning. 
Exercise 2 Read Turing’s original paper on AI Turing:1950 .In the paper, he discusses several objections to his proposed enterprise and his test forintelligence. Which objections still carry weight? Are his refutationsvalid? Can you think of new objections arising from developments sincehe wrote the paper? In the paper, he predicts that, by the year 2000, acomputer will have a 30% chance of passing a five-minute Turing Testwith an unskilled interrogator. What chance do you think a computerwould have today? In another 50 years? Exercise 3 Every year the Loebner Prize is awarded to the program that comesclosest to passing a version of the Turing Test. Research and report onthe latest winner of the Loebner prize. What techniques does it use? Howdoes it advance the state of the art in AI? Exercise 4 Are reflex actions (such as flinching from a hot stove) rational? Arethey intelligent? Exercise 5 There are well-known classes of problems that are intractably difficultfor computers, and other classes that are provably undecidable. Doesthis mean that AI is impossible? Exercise 6 Suppose we extend Evans’s SYSTEM program so that it can score 200 on a standardIQ test. Would we then have a program more intelligent than a human?Explain. Exercise 7 The neural structure of the sea slug Aplysis has beenwidely studied (first by Nobel Laureate Eric Kandel) because it has onlyabout 20,000 neurons, most of them large and easily manipulated.Assuming that the cycle time for an Aplysis neuron isroughly the same as for a human neuron, how does the computationalpower, in terms of memory updates per second, compare with the high-endcomputer described in (Figure computer-brain-table)? Exercise 8 How could introspection—reporting on one’s inner thoughts—be inaccurate?Could I be wrong about what I’m thinking? Discuss. 
Exercise 9 To what extent are the following computer systems instances ofartificial intelligence:- Supermarket bar code scanners.- Web search engines.- Voice-activated telephone menus.- Internet routing algorithms that respond dynamically to the state of the network. Exercise 10 To what extent are the following computer systems instances ofartificial intelligence:- Supermarket bar code scanners.- Voice-activated telephone menus.- Spelling and grammar correction features in Microsoft Word.- Internet routing algorithms that respond dynamically to the state of the network. Exercise 11 Many of the computational models of cognitive activities that have beenproposed involve quite complex mathematical operations, such asconvolving an image with a Gaussian or finding a minimum of the entropyfunction. Most humans (and certainly all animals) never learn this kindof mathematics at all, almost no one learns it before college, andalmost no one can compute the convolution of a function with a Gaussianin their head. What sense does it make to say that the “vision system”is doing this kind of mathematics, whereas the actual person has no ideahow to do it? Exercise 12 Some authors have claimed that perception and motor skills are the mostimportant part of intelligence, and that “higher level” capacities arenecessarily parasitic—simple add-ons to these underlying facilities.Certainly, most of evolution and a large part of the brain have beendevoted to perception and motor skills, whereas AI has found tasks suchas game playing and logical inference to be easier, in many ways, thanperceiving and acting in the real world. Do you think that AI’straditional focus on higher-level cognitive abilities is misplaced? Exercise 13 Why would evolution tend to result in systems that act rationally? Whatgoals are such systems designed to achieve? Exercise 14 Is AI a science, or is it engineering? Or neither or both? Explain. 
Exercise 15

"Surely computers cannot be intelligent—they can do only what their programmers tell them." Is the latter statement true, and does it imply the former?

Exercise 16

"Surely animals cannot be intelligent—they can do only what their genes tell them." Is the latter statement true, and does it imply the former?

Exercise 17

"Surely animals, humans, and computers cannot be intelligent—they can do only what their constituent atoms are told to do by the laws of physics." Is the latter statement true, and does it imply the former?

Exercise 18

Examine the AI literature to discover whether the following tasks can currently be solved by computers:

- Playing a decent game of table tennis (Ping-Pong).
- Driving in the center of Cairo, Egypt.
- Driving in Victorville, California.
- Buying a week's worth of groceries at the market.
- Buying a week's worth of groceries on the Web.
- Playing a decent game of bridge at a competitive level.
- Discovering and proving new mathematical theorems.
- Writing an intentionally funny story.
- Giving competent legal advice in a specialized area of law.
- Translating spoken English into spoken Swedish in real time.
- Performing a complex surgical operation.

Exercise 19

For the currently infeasible tasks, try to find out what the difficulties are and predict when, if ever, they will be overcome.

Exercise 20

Various subfields of AI have held contests by defining a standard task and inviting researchers to do their best. Examples include the DARPA Grand Challenge for robotic cars, the International Planning Competition, the Robocup robotic soccer league, the TREC information retrieval event, and contests in machine translation and speech recognition. Investigate five of these contests and describe the progress made over the years. To what degree have the contests advanced the state of the art in AI? To what degree do they hurt the field by drawing energy away from new ideas?
Exercise 21

Suppose that the performance measure is concerned with just the first $T$ time steps of the environment and ignores everything thereafter. Show that a rational agent's action may depend not just on the state of the environment but also on the time step it has reached.

Exercise 22 (vacuum-rationality-exercise)

Let us examine the rationality of various vacuum-cleaner agent functions.

1. Show that the simple vacuum-cleaner agent function described in Figure vacuum-agent-function-table is indeed rational under the assumptions listed on page vacuum-rationality-page.
2. Describe a rational agent function for the case in which each movement costs one point. Does the corresponding agent program require internal state?
3. Discuss possible agent designs for the cases in which clean squares can become dirty and the geography of the environment is unknown. Does it make sense for the agent to learn from its experience in these cases? If so, what should it learn? If not, why not?

Exercise 23

Write an essay on the relationship between evolution and one or more of autonomy, intelligence, and learning.

Exercise 24

For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate.

1. An agent that senses only partial information about the state cannot be perfectly rational.
2. There exist task environments in which no pure reflex agent can behave rationally.
3. There exists a task environment in which every agent is rational.
4. The input to an agent program is the same as the input to the agent function.
5. Every agent function is implementable by some program/machine combination.
6. Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.
7. It is possible for a given agent to be perfectly rational in two distinct task environments.
8. Every agent is rational in an unobservable environment.
9. A perfectly rational poker-playing agent never loses.

Exercise 25 (PEAS-exercise)

For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in Section env-properties-subsection.

- Playing soccer.
- Exploring the subsurface oceans of Titan.
- Shopping for used AI books on the Internet.
- Playing a tennis match.
- Practicing tennis against a wall.
- Performing a high jump.
- Knitting a sweater.
- Bidding on an item at an auction.

Exercise 26

For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in Section env-properties-subsection.

- Performing a gymnastics floor routine.
- Exploring the subsurface oceans of Titan.
- Playing soccer.
- Shopping for used AI books on the Internet.
- Practicing tennis against a wall.
- Performing a high jump.
- Bidding on an item at an auction.

Exercise 27 (agent-fn-prog-exercise)

Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

Exercise 28

This exercise explores the differences between agent functions and agent programs.

1. Can there be more than one agent program that implements a given agent function? Give an example, or show why one is not possible.
2. Are there agent functions that cannot be implemented by any agent program?
3. Given a fixed machine architecture, does each agent program implement exactly one agent function?
4. Given an architecture with $n$ bits of storage, how many different possible agent programs are there?
5. Suppose we keep the agent program fixed but speed up the machine by a factor of two. Does that change the agent function?

Exercise 29

Write pseudocode agent programs for the goal-based and utility-based agents.

The following exercises all concern the implementation of environments and agents for the vacuum-cleaner world.
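Since the exercises below ask for concrete agent programs, a minimal Python sketch of the two-square vacuum world and a simple reflex agent may be a useful starting point. The representation here (locations "A" and "B", one point per clean square per time step, and all function names) is our own assumption for illustration, not the book's code.

```python
# Sketch (not the book's code): two-square vacuum world, simple reflex agent.
# percept = (location, status); performance = clean squares per time step.

def reflex_vacuum_agent(percept):
    """Simple reflex agent: suck if dirty, otherwise move to the other square."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(agent, dirt, location="A", steps=10):
    """Run one episode; score one point per clean square per time step
    (a hypothetical performance measure chosen for this sketch)."""
    dirt = dict(dirt)  # e.g. {"A": True, "B": True} means both squares dirty
    score = 0
    for _ in range(steps):
        action = agent((location, "Dirty" if dirt[location] else "Clean"))
        if action == "Suck":
            dirt[location] = False
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        score += sum(1 for dirty in dirt.values() if not dirty)
    return score
```

The same harness adapts to the movement-cost variant by subtracting a point whenever the chosen action is Left or Right.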
Exercise 30 (vacuum-start-exercise)

Consider a simple thermostat that turns on a furnace when the temperature is at least 3 degrees below the setting, and turns off a furnace when the temperature is at least 3 degrees above the setting. Is a thermostat an instance of a simple reflex agent, a model-based reflex agent, or a goal-based agent?

Exercise 31

Implement a performance-measuring environment simulator for the vacuum-cleaner world depicted in Figure vacuum-world-figure and specified on page vacuum-rationality-page. Your implementation should be modular so that the sensors, actuators, and environment characteristics (size, shape, dirt placement, etc.) can be changed easily. (Note: for some choices of programming language and operating system there are already implementations in the online code repository.)

Exercise 32 (vacuum-motion-penalty-exercise)

Implement a simple reflex agent for the vacuum environment in Exercise vacuum-start-exercise. Run the environment with this agent for all possible initial dirt configurations and agent locations. Record the performance score for each configuration and the overall average score.

Exercise 33 (vacuum-unknown-geog-exercise)

Consider a modified version of the vacuum environment in Exercise vacuum-start-exercise, in which the agent is penalized one point for each movement.

1. Can a simple reflex agent be perfectly rational for this environment? Explain.
2. What about a reflex agent with state? Design such an agent.
3. How do your answers to 1 and 2 change if the agent's percepts give it the clean/dirty status of every square in the environment?

Exercise 34 (vacuum-bump-exercise)

Consider a modified version of the vacuum environment in Exercise vacuum-start-exercise, in which the geography of the environment—its extent, boundaries, and obstacles—is unknown, as is the initial dirt configuration. (The agent can go Up and Down as well as Left and Right.)

1. Can a simple reflex agent be perfectly rational for this environment? Explain.
2. Can a simple reflex agent with a randomized agent function outperform a simple reflex agent? Design such an agent and measure its performance on several environments.
3. Can you design an environment in which your randomized agent will perform poorly? Show your results.
4. Can a reflex agent with state outperform a simple reflex agent? Design such an agent and measure its performance on several environments. Can you design a rational agent of this type?

Exercise 35 (vacuum-finish-exercise)

Repeat Exercise vacuum-unknown-geog-exercise for the case in which the location sensor is replaced with a "bump" sensor that detects the agent's attempts to move into an obstacle or to cross the boundaries of the environment. Suppose the bump sensor stops working; how should the agent behave?

Exercise 36

Explain why problem formulation must follow goal formulation.

Exercise 37

Give a complete problem formulation for each of the following problems. Choose a formulation that is precise enough to be implemented.

1. There are six glass boxes in a row, each with a lock. Each of the first five boxes holds a key unlocking the next box in line; the last box holds a banana. You have the key to the first box, and you want the banana.
2. You start with the sequence ABABAECCEC, or in general any sequence made from A, B, C, and E. You can transform this sequence using the following equalities: AC = E, AB = BC, BB = E, and E$x$ = $x$ for any $x$. For example, ABBC can be transformed into AEC, and then AC, and then E. Your goal is to produce the sequence E.
3. There is an $n \times n$ grid of squares, each square initially being either unpainted floor or a bottomless pit. You start standing on an unpainted floor square, and can either paint the square under you or move onto an adjacent unpainted floor square. You want the whole floor painted.
4. A container ship is in port, loaded high with containers. There are 13 rows of containers, each 13 containers wide and 5 containers tall.
You control a crane that can move to any location above the ship, pick up the container under it, and move it onto the dock. You want the ship unloaded.

Exercise 38

Your goal is to navigate a robot out of a maze. The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop before hitting a wall.

1. Formulate this problem. How large is the state space?
2. In navigating a maze, the only place we need to turn is at the intersection of two or more corridors. Reformulate this problem using this observation. How large is the state space now?
3. From each point in the maze, we can move in any of the four directions until we reach a turning point, and this is the only action we need to do. Reformulate the problem using these actions. Do we need to keep track of the robot's orientation now?
4. In our initial description of the problem we already abstracted from the real world, restricting actions and removing details. List three such simplifications we made.

Exercise 39

You have a $9 \times 9$ grid of squares, each of which can be colored red or blue. The grid is initially colored all blue, but you can change the color of any square any number of times. Imagining the grid divided into nine $3 \times 3$ sub-squares, you want each sub-square to be all one color but neighboring sub-squares to be different colors.

1. Formulate this problem in the straightforward way. Compute the size of the state space.
2. You need to color a square only once. Reformulate, and compute the size of the state space. Would breadth-first graph search perform faster on this problem than on the one in (a)? How about iterative deepening tree search?
3. Given the goal, we need consider only colorings where each sub-square is uniformly colored. Reformulate the problem and compute the size of the state space.
4. How many solutions does this problem have?
5. Parts (b) and (c) successively abstracted the original problem (a). Can you give a translation from solutions in problem (c) into solutions in problem (b), and from solutions in problem (b) into solutions for problem (a)?

Exercise 40 (two-friends-exercise)

Suppose two friends live in different cities on a map, such as the Romania map shown in . On every turn, we can simultaneously move each friend to a neighboring city on the map. The amount of time needed to move from city $i$ to neighbor $j$ is equal to the road distance $d(i,j)$ between the cities, but on each turn the friend that arrives first must wait until the other one arrives (and calls the first on his/her cell phone) before the next turn can begin. We want the two friends to meet as quickly as possible.

1. Write a detailed formulation for this search problem. (You will find it helpful to define some formal notation here.)
2. Let $D(i,j)$ be the straight-line distance between cities $i$ and $j$. Which of the following heuristic functions are admissible? (i) $D(i,j)$; (ii) $2\cdot D(i,j)$; (iii) $D(i,j)/2$.
3. Are there completely connected maps for which no solution exists?
4. Are there maps in which all solutions require one friend to visit the same city twice?

Exercise 41 (8puzzle-parity-exercise)

Show that the 8-puzzle states are divided into two disjoint sets, such that any state is reachable from any other state in the same set, while no state is reachable from any state in the other set. (Hint: See Berlekamp+al:1982.) Devise a procedure to decide which set a given state is in, and explain why this is useful for generating random states.

Exercise 42 (nqueens-size-exercise)

Consider the $n$-queens problem using the "efficient" incremental formulation given on page nqueens-page. Explain why the state space has at least $\sqrt[3]{n!}$ states and estimate the largest $n$ for which exhaustive exploration is feasible.
(Hint: Derive a lower bound on the branching factor by considering the maximum number of squares that a queen can attack in any column.)

Exercise 43

Give a complete problem formulation for each of the following. Choose a formulation that is precise enough to be implemented.

1. Using only four colors, you have to color a planar map in such a way that no two adjacent regions have the same color.
2. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas. The room contains two stackable, movable, climbable 3-foot-high crates.
3. You have a program that outputs the message "illegal input record" when fed a certain file of input records. You know that processing of each record is independent of the other records. You want to discover what record is illegal.
4. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet. You can fill the jugs up or empty them out from one to another or onto the ground. You need to measure out exactly one gallon.

Exercise 44 (path-planning-exercise)

Consider the problem of finding the shortest path between two points on a plane that has convex polygonal obstacles as shown in . This is an idealization of the problem that a robot has to solve to navigate in a crowded environment.

1. Suppose the state space consists of all positions $(x,y)$ in the plane. How many states are there? How many paths are there to the goal?
2. Explain briefly why the shortest path from one polygon vertex to any other in the scene must consist of straight-line segments joining some of the vertices of the polygons. Define a good state space now. How large is this state space?
3. Define the necessary functions to implement the search problem, including a function that takes a vertex as input and returns a set of vectors, each of which maps the current vertex to one of the vertices that can be reached in a straight line. (Do not forget the neighbors on the same polygon.)
Use the straight-line distance for the heuristic function.

4. Apply one or more of the algorithms in this chapter to solve a range of problems in the domain, and comment on their performance.

Exercise 45 (negative-g-exercise)

On page non-negative-g, we said that we would not consider problems with negative path costs. In this exercise, we explore this decision in more depth.

1. Suppose that actions can have arbitrarily large negative costs; explain why this possibility would force any optimal algorithm to explore the entire state space.
2. Does it help if we insist that step costs must be greater than or equal to some negative constant $c$? Consider both trees and graphs.
3. Suppose that a set of actions forms a loop in the state space such that executing the set in some order results in no net change to the state. If all of these actions have negative cost, what does this imply about the optimal behavior for an agent in such an environment?
4. One can easily imagine actions with high negative cost, even in domains such as route finding. For example, some stretches of road might have such beautiful scenery as to far outweigh the normal costs in terms of time and fuel. Explain, in precise terms, within the context of state-space search, why humans do not drive around scenic loops indefinitely, and explain how to define the state space and actions for route finding so that artificial agents can also avoid looping.
5. Can you think of a real domain in which step costs are such as to cause looping?

Exercise 46 (mc-problem)

The problem is usually stated as follows. Three missionaries and three cannibals are on one side of a river, along with a boat that can hold one or two people. Find a way to get everyone to the other side without ever leaving a group of missionaries in one place outnumbered by the cannibals in that place. This problem is famous in AI because it was the subject of the first paper that approached problem formulation from an analytical viewpoint (Amarel:1968).

1. Formulate the problem precisely, making only those distinctions necessary to ensure a valid solution. Draw a diagram of the complete state space.
2. Implement and solve the problem optimally using an appropriate search algorithm. Is it a good idea to check for repeated states?
3. Why do you think people have a hard time solving this puzzle, given that the state space is so simple?

Exercise 47

Define in your own words the following terms: state, state space, search tree, search node, goal, action, transition model, and branching factor.

Exercise 48

What's the difference between a world state, a state description, and a search node? Why is this distinction useful?

Exercise 49

An action such as really consists of a long sequence of finer-grained actions: turn on the car, release the brake, accelerate forward, etc. Having composite actions of this kind reduces the number of steps in a solution sequence, thereby reducing the search time. Suppose we take this to the logical extreme, by making super-composite actions out of every possible sequence of actions. Then every problem instance is solved by a single super-composite action, such as . Explain how search would work in this formulation. Is this a practical approach for speeding up problem solving?

Exercise 50

Does a finite state space always lead to a finite search tree? How about a finite state space that is a tree? Can you be more precise about what types of state spaces always lead to finite search trees? (Adapted from , 1996.)

Exercise 51 (graph-separation-property-exercise)

Prove that satisfies the graph separation property illustrated in . (Hint: Begin by showing that the property holds at the start, then show that if it holds before an iteration of the algorithm, it holds afterwards.) Describe a search algorithm that violates the property.

Exercise 52

Which of the following are true and which are false? Explain your answers.

1. Depth-first search always expands at least as many nodes as A* search with an admissible heuristic.
2. $h(n)=0$ is an admissible heuristic for the 8-puzzle.
3. A* is of no use in robotics because percepts, states, and actions are continuous.
4. Breadth-first search is complete even if zero step costs are allowed.
5. Assume that a rook can move on a chessboard any number of squares in a straight line, vertically or horizontally, but cannot jump over other pieces. Manhattan distance is an admissible heuristic for the problem of moving the rook from square A to square B in the smallest number of moves.

Exercise 53

Consider a state space where the start state is number 1 and each state $k$ has two successors: numbers $2k$ and $2k+1$.

1. Draw the portion of the state space for states 1 to 15.
2. Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.
3. How well would bidirectional search work on this problem? What is the branching factor in each direction of the bidirectional search?
4. Does the answer to (c) suggest a reformulation of the problem that would allow you to solve the problem of getting from state 1 to a given goal state with almost no search?
5. Call the action going from $k$ to $2k$ Left, and the action going to $2k+1$ Right. Can you find an algorithm that outputs the solution to this problem without any search at all?

Exercise 54 (brio-exercise)

A basic wooden railway set contains the pieces shown in . The task is to connect these pieces into a railway that has no overlapping tracks and no loose ends where a train could run off onto the floor.

1. Suppose that the pieces fit together exactly with no slack. Give a precise formulation of the task as a search problem.
2. Identify a suitable uninformed search algorithm for this task and explain your choice.
3. Explain why removing any one of the "fork" pieces makes the problem unsolvable.
4. Give an upper bound on the total size of the state space defined by your formulation.
(Hint: think about the maximum branching factor for the construction process and the maximum depth, ignoring the problem of overlapping pieces and loose ends. Begin by pretending that every piece is unique.)

Exercise 55

Implement two versions of the function for the 8-puzzle: one that copies and edits the data structure for the parent node $s$ and one that modifies the parent state directly (undoing the modifications as needed). Write versions of iterative deepening depth-first search that use these functions and compare their performance.

Exercise 56 (iterative-lengthening-exercise)

On page iterative-lengthening-page, we mentioned iterative lengthening search, an iterative analog of uniform-cost search. The idea is to use increasing limits on path cost. If a node is generated whose path cost exceeds the current limit, it is immediately discarded. For each new iteration, the limit is set to the lowest path cost of any node discarded in the previous iteration.

1. Show that this algorithm is optimal for general path costs.
2. Consider a uniform tree with branching factor $b$, solution depth $d$, and unit step costs. How many iterations will iterative lengthening require?
3. Now consider step costs drawn from the continuous range $[\epsilon,1]$, where $0 < \epsilon < 1$. How many iterations are required in the worst case?
4. Implement the algorithm and apply it to instances of the 8-puzzle and traveling salesperson problems. Compare the algorithm's performance to that of uniform-cost search, and comment on your results.

Exercise 57

Describe a state space in which iterative deepening search performs much worse than depth-first search (for example, $O(n^{2})$ vs. $O(n)$).

Exercise 58

Write a program that will take as input two Web page URLs and find a path of links from one to the other. What is an appropriate search strategy? Is bidirectional search a good idea? Could a search engine be used to implement a predecessor function?
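The iterative lengthening idea above can be prototyped directly. The sketch below is one reading of the algorithm with interfaces we have assumed for illustration: `successors(state)` yields `(cost, next_state)` pairs, and the state space is assumed to be a finite tree (like any tree search, this version can loop forever on cyclic graphs).

```python
import math

def iterative_lengthening(start, successors, is_goal):
    """Sketch of iterative lengthening search: depth-first search bounded by a
    path-cost limit; each new limit is the lowest path cost discarded in the
    previous iteration. Assumed interfaces, not the book's pseudocode."""
    limit = 0.0
    while True:
        next_limit = math.inf

        def dfs(state, g):
            nonlocal next_limit
            if is_goal(state):
                return [state]
            for cost, succ in successors(state):
                if g + cost > limit:
                    # Discard, but remember the cheapest discarded path cost.
                    next_limit = min(next_limit, g + cost)
                else:
                    path = dfs(succ, g + cost)
                    if path is not None:
                        return [state] + path
            return None

        result = dfs(start, 0.0)
        if result is not None:
            return result
        if next_limit == math.inf:
            return None  # nothing was discarded: the space is exhausted
        limit = next_limit
```

Comparing the number of `dfs` calls here against the number of nodes expanded by uniform-cost search is one concrete way to approach part 4.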
Exercise 59 (vacuum-search-exercise)

Consider the vacuum-world problem defined in .

1. Which of the algorithms defined in this chapter would be appropriate for this problem? Should the algorithm use tree search or graph search?
2. Apply your chosen algorithm to compute an optimal sequence of actions for a $3\times 3$ world whose initial state has dirt in the three top squares and the agent in the center.
3. Construct a search agent for the vacuum world, and evaluate its performance in a set of $3\times 3$ worlds with probability 0.2 of dirt in each square. Include the search cost as well as path cost in the performance measure, using a reasonable exchange rate.
4. Compare your best search agent with a simple randomized reflex agent that sucks if there is dirt and otherwise moves randomly.
5. Consider what would happen if the world were enlarged to $n \times n$. How does the performance of the search agent and of the reflex agent vary with $n$?

Exercise 60 (search-special-case-exercise)

Prove each of the following statements, or give a counterexample:

1. Breadth-first search is a special case of uniform-cost search.
2. Depth-first search is a special case of best-first tree search.
3. Uniform-cost search is a special case of A* search.

Exercise 61

Compare the performance of A* and RBFS on a set of randomly generated problems in the 8-puzzle (with Manhattan distance) and TSP (with MST—see ) domains. Discuss your results. What happens to the performance of RBFS when a small random number is added to the heuristic values in the 8-puzzle domain?

Exercise 62

Trace the operation of A* search applied to the problem of getting to Bucharest from Lugoj using the straight-line distance heuristic. That is, show the sequence of nodes that the algorithm will consider and the $f$, $g$, and $h$ score for each node.
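The exercises above all revolve around best-first search with $f(n) = g(n) + h(n)$. When tracing A* by hand, a small reference implementation helps check the $f$, $g$, and $h$ values; the sketch below uses our own assumed interfaces (`neighbors(n)` yields `(cost, m)` pairs), not the book's pseudocode, and with $h$ identically zero it behaves as uniform-cost search, which bears on part 3 of Exercise 60.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* graph search sketch. Returns (path cost, path) or None.
    Optimal when h is admissible and consistent (e.g. Manhattan distance
    on a unit-cost grid)."""
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for cost, m in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(m, float("inf")):  # keep only the cheapest route
                best_g[m] = g2
                heapq.heappush(frontier, (g2 + h(m), g2, m, path + [m]))
    return None
```

For example, on a hypothetical $3\times 3$ unit-cost grid with Manhattan distance as $h$, a search from one corner to the opposite corner returns a path of cost 4, matching the heuristic's estimate at the start.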
Exercise 63

Sometimes there is no good evaluation function for a problem but there is a good comparison method: a way to tell whether one node is better than another without assigning numerical values to either. Show that this is enough to do a best-first search. Is there an analog of A* for this setting?

Exercise 64 (failure-exercise)

Devise a state space in which A* search returns a suboptimal solution with an $h(n)$ function that is admissible but inconsistent.

Exercise 65

Accurate heuristics don't necessarily reduce search time in the worst case. Given any depth $d$, define a search problem with a goal node at depth $d$, and write a heuristic function such that $|h(n) - h^*(n)| \le O(\log h^*(n))$ but $A^*$ expands all nodes of depth less than $d$.

Exercise 66

The heuristic path algorithm (Pohl:1977) is a best-first search in which the evaluation function is $f(n) = (2-w)g(n) + wh(n)$. For what values of $w$ is this complete? For what values is it optimal, assuming that $h$ is admissible? What kind of search does this perform for $w=0$, $w=1$, and $w=2$?

Exercise 67

Consider the unbounded version of the regular 2D grid shown in . The start state is at the origin, (0,0), and the goal state is at $(x,y)$.

1. What is the branching factor $b$ in this state space?
2. How many distinct states are there at depth $k$ (for $k>0$)?
3. What is the maximum number of nodes expanded by breadth-first tree search?
4. What is the maximum number of nodes expanded by breadth-first graph search?
5. Is $h = |u-x| + |v-y|$ an admissible heuristic for a state at $(u,v)$? Explain.
6. How many nodes are expanded by A* graph search using $h$?
7. Does $h$ remain admissible if some links are removed?
8. Does $h$ remain admissible if some links are added between nonadjacent states?

Exercise 68

$n$ vehicles occupy squares $(1,1)$ through $(n,1)$ (i.e., the bottom row) of an $n\times n$ grid.
The vehicles must be moved to the top row but in reverse order; so the vehicle $i$ that starts in $(i,1)$ must end up in $(n-i+1,n)$. On each time step, every one of the $n$ vehicles can move one square up, down, left, or right, or stay put; but if a vehicle stays put, one other adjacent vehicle (but not more than one) can hop over it. Two vehicles cannot occupy the same square.

1. Calculate the size of the state space as a function of $n$.
2. Calculate the branching factor as a function of $n$.
3. Suppose that vehicle $i$ is at $(x_i,y_i)$; write a nontrivial admissible heuristic $h_i$ for the number of moves it will require to get to its goal location $(n-i+1,n)$, assuming no other vehicles are on the grid.
4. Which of the following heuristics are admissible for the problem of moving all $n$ vehicles to their destinations? Explain.
   1. $\sum_{i=1}^{n} h_i$.
   2. $\max\{h_1,\ldots,h_n\}$.
   3. $\min\{h_1,\ldots,h_n\}$.

Exercise 69

Consider the problem of moving $k$ knights from $k$ starting squares $s_1,\ldots,s_k$ to $k$ goal squares $g_1,\ldots,g_k$, on an unbounded chessboard, subject to the rule that no two knights can land on the same square at the same time. Each action consists of moving up to $k$ knights simultaneously. We would like to complete the maneuver in the smallest number of actions.

1. What is the maximum branching factor in this state space, expressed as a function of $k$?
2. Suppose $h_i$ is an admissible heuristic for the problem of moving knight $i$ to goal $g_i$ by itself. Which of the following heuristics are admissible for the $k$-knight problem? Of those, which is the best?
   1. $\min\{h_1,\ldots,h_k\}$.
   2. $\max\{h_1,\ldots,h_k\}$.
   3. $\sum_{i=1}^{k} h_i$.
3. Repeat (b) for the case where you are allowed to move only one knight at a time.

Exercise 70

We saw on page I-to-F that the straight-line distance heuristic leads greedy best-first search astray on the problem of going from Iasi to Fagaras. However, the heuristic is perfect on the opposite problem: going from Fagaras to Iasi.
Are there problems for which the heuristic is misleading in both directions?

Exercise 71

Invent a heuristic function for the 8-puzzle that sometimes overestimates, and show how it can lead to a suboptimal solution on a particular problem. (You can use a computer to help if you want.) Prove that if $h$ never overestimates by more than $c$, A* using $h$ returns a solution whose cost exceeds that of the optimal solution by no more than $c$.

Exercise 72

Prove that if a heuristic is consistent, it must be admissible. Construct an admissible heuristic that is not consistent.

Exercise 73

The traveling salesperson problem (TSP) can be solved with the minimum-spanning-tree (MST) heuristic, which estimates the cost of completing a tour, given that a partial tour has already been constructed. The MST cost of a set of cities is the smallest sum of the link costs of any tree that connects all the cities.

1. Show how this heuristic can be derived from a relaxed version of the TSP.
2. Show that the MST heuristic dominates straight-line distance.
3. Write a problem generator for instances of the TSP where cities are represented by random points in the unit square.
4. Find an efficient algorithm in the literature for constructing the MST, and use it with A* graph search to solve instances of the TSP.

Exercise 74 (Gaschnig-h-exercise)

On page Gaschnig-h-page, we defined the relaxation of the 8-puzzle in which a tile can move from square A to square B if B is blank. The exact solution of this problem defines Gaschnig's heuristic (Gaschnig:1979). Explain why Gaschnig's heuristic is at least as accurate as $h_1$ (misplaced tiles), and show cases where it is more accurate than both $h_1$ and $h_2$ (Manhattan distance). Explain how to calculate Gaschnig's heuristic efficiently.

Exercise 75

We gave two simple heuristics for the 8-puzzle: Manhattan distance and misplaced tiles. Several heuristics in the literature purport to improve on this—see, for example, Nilsson:1971, Mostow+Prieditis:1989, and Hansson+al:1992.
Test these claims by implementing the heuristics and comparing the performance of the resulting algorithms.

Exercise 1

Give the name of the algorithm that results from each of the following special cases:

1. Local beam search with $k = 1$.
2. Local beam search with one initial state and no limit on the number of states retained.
3. Simulated annealing with $T = 0$ at all times (and omitting the termination test).
4. Simulated annealing with $T=\infty$ at all times.
5. Genetic algorithm with population size $N = 1$.

Exercise 2

Exercise brio-exercise considers the problem of building railway tracks under the assumption that pieces fit exactly with no slack. Now consider the real problem, in which pieces don't fit exactly but allow for up to 10 degrees of rotation to either side of the "proper" alignment. Explain how to formulate the problem so it could be solved by simulated annealing.

Exercise 3

In this exercise, we explore the use of local search methods to solve TSPs of the type defined in Exercise tsp-mst-exercise.

1. Implement and test a hill-climbing method to solve TSPs. Compare the results with optimal solutions obtained from the A* algorithm with the MST heuristic (Exercise tsp-mst-exercise).
2. Repeat part (a) using a genetic algorithm instead of hill climbing. You may want to consult Larranaga+al:1999 for some suggestions for representations.

Exercise 4 (hill-climbing-exercise)

Generate a large number of 8-puzzle and 8-queens instances and solve them (where possible) by hill climbing (steepest-ascent and first-choice variants), hill climbing with random restart, and simulated annealing. Measure the search cost and percentage of solved problems and graph these against the optimal solution cost. Comment on your results.
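For the 8-queens part of the experiments above, all that is needed is an objective function and a neighbor move. The sketch below uses the common one-queen-per-column representation; the function names and the steepest-ascent loop are our own choices for illustration, and a full experiment would add random restarts and the first-choice variant.

```python
import random

def conflicts(state):
    """Number of attacking queen pairs; state[c] = row of the queen in column c."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(n=8, seed=0):
    """Steepest-ascent hill climbing for n-queens: repeatedly move to the
    best single-queen move; stop at a local minimum (not always a solution)."""
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]
    while True:
        best, best_h = state, conflicts(state)
        for col in range(n):
            for row in range(n):
                if row == state[col]:
                    continue
                candidate = state[:col] + [row] + state[col + 1:]
                h = conflicts(candidate)
                if h < best_h:
                    best, best_h = candidate, h
        if best is state:  # no improving move: local minimum reached
            return state, conflicts(state)
        state = best
```

Running this from many random seeds and counting how often the returned conflict count is zero gives the "percentage of solved problems" statistic the exercise asks for.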
Exercise 5 (cond-plan-repeated-exercise)

The **And-Or-Graph-Search** algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store *every* visited state and check against that list. (See Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. (*Hint*: You will need to distinguish at least between states for which a successful subplan was constructed previously and states for which no subplan could be found.) Explain how to use labels, as defined in Section cyclic-plan-section, to avoid having multiple copies of subplans.

Exercise 6 (cond-loop-exercise)

Explain precisely how to modify the **And-Or-Graph-Search** algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can point back to an earlier part of the plan, modifying **Or-Search** so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) the slippery, erratic vacuum world. You might wish to use a computer implementation to check your results.

Exercise 7

In Section conformant-section we introduced belief states to solve sensorless search problems. A sequence of actions solves a sensorless problem if it maps every physical state in the initial belief state $b$ to a goal state. Suppose the agent knows $h^*(s)$, the true optimal cost of solving the physical state $s$ in the fully observable problem, for every state $s$ in $b$. Find an admissible heuristic $h(b)$ for the sensorless problem in terms of these costs, and prove its admissibility.
Comment on the accuracy of this heuristic on the sensorless vacuum problem of Figure vacuum2-sets-figure. How well does A* perform?

Exercise 8 (belief-state-superset-exercise)

This exercise explores subset–superset relations between belief states in sensorless or partially observable environments.
1. Prove that if an action sequence is a solution for a belief state $b$, it is also a solution for any subset of $b$. Can anything be said about supersets of $b$?
2. Explain in detail how to modify graph search for sensorless problems to take advantage of your answers in (a).
3. Explain in detail how to modify and–or search for partially observable problems, beyond the modifications you describe in (b).

Exercise 9 (multivalued-sensorless-exercise)

On page multivalued-sensorless-page it was assumed that a given action would have the same cost when executed in any physical state within a given belief state. (This leads to a belief-state search problem with well-defined step costs.) Now consider what happens when the assumption does not hold. Does the notion of optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the *minimum* of the physical costs; or the *maximum*; or a cost *interval* with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. For each of these, explore whether A* (with modifications if necessary) can return optimal solutions.

Exercise 10 (vacuum-solvable-exercise)

Consider the sensorless version of the erratic vacuum world. Draw the belief-state space reachable from the initial belief state $\{1,2,3,4,5,6,7,8\}$, and explain why the problem is unsolvable.

Exercise 11 (vacuum-solvable-exercise)

Consider the sensorless version of the erratic vacuum world.
Draw the belief-state space reachable from the initial belief state $\{ 1,3,5,7 \}$, and explain why the problem is unsolvable.

Exercise 12 (path-planning-agent-exercise)

We can turn the navigation problem in Exercise path-planning-exercise into an environment as follows:
- The percept will be a list of the positions, *relative to the agent*, of the visible vertices. The percept does *not* include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
- Each action will be a vector describing a straight-line path to follow. If the path is unobstructed, the action succeeds; otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment teleports the agent to a *random location* (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of distance traversed and awards 1000 points each time the goal is reached.
1. Implement this environment and a problem-solving agent for it. After each teleportation, the agent will need to formulate a new problem, which will involve discovering its current location.
2. Document your agent’s performance (by having the agent generate suitable commentary as it moves around) and report its performance over 100 episodes.
3. Modify the environment so that 30% of the time the agent ends up at an unintended destination (chosen randomly from the other visible vertices if any; otherwise, no move at all). This is a crude model of the motion errors of a real robot. Modify the agent so that when such an error is detected, it finds out where it is and then constructs a plan to get back to where it was and resume the old plan. Remember that sometimes getting back to where it was might also fail! Show an example of the agent successfully overcoming two successive motion errors and still reaching the goal.
4.
Now try two different recovery schemes after an error: (1) head for the closest vertex on the original route; and (2) replan a route to the goal from the new location. Compare the performance of the three recovery schemes. Would the inclusion of search costs affect the comparison?
5. Now suppose that there are locations from which the view is identical. (For example, suppose the world is a grid with square obstacles.) What kind of problem does the agent now face? What do solutions look like?

Exercise 13 (online-offline-exercise)

Suppose that an agent is in a $3 \times 3$ maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the actions *Up*, *Down*, *Left*, *Right* have their usual effects unless blocked by a wall. The agent does *not* know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
1. Explain how this online search problem can be viewed as an offline search in belief-state space, where the initial belief state includes all possible environment configurations. How large is the initial belief state? How large is the space of belief states?
2. How many distinct percepts are possible in the initial state?
3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
Notice that this contingency plan is a solution for *every possible environment* fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

Exercise 14 (online-offline-exercise)

Suppose that an agent is in a $3 \times 3$ maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (3,3), that the goal is at (1,1), and that the four actions *Up*, *Down*, *Left*, *Right* have their usual effects unless blocked by a wall.
The agent does *not* know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before or is a new state.
1. Explain how this online search problem can be viewed as an offline search in belief-state space, where the initial belief state includes all possible environment configurations. How large is the initial belief state? How large is the space of belief states?
2. How many distinct percepts are possible in the initial state?
3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?
Notice that this contingency plan is a solution for *every possible environment* fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

Exercise 15 (path-planning-hc-exercise)

In this exercise, we examine hill climbing in the context of robot navigation, using the environment in Figure geometric-scene-figure as an example.
1. Repeat Exercise path-planning-agent-exercise using hill climbing. Does your agent ever get stuck in a local minimum? Is it *possible* for it to get stuck with convex obstacles?
2. Construct a nonconvex polygonal environment in which the agent gets stuck.
3. Modify the hill-climbing algorithm so that, instead of doing a depth-1 search to decide where to go next, it does a depth-$k$ search. It should find the best $k$-step path and do one step along it, and then repeat the process.
4. Is there some $k$ for which the new algorithm is guaranteed to escape from local minima?
5. Explain how LRTA* enables the agent to escape from local minima in this case.

Exercise 16

Like DFS, online DFS is incomplete for reversible state spaces with infinite paths. For example, suppose that states are points on the infinite two-dimensional grid and actions are unit vectors $(1,0)$, $(0,1)$, $(-1,0)$, $(0,-1)$, tried in that order.
Show that online DFS starting at $(0,0)$ will not reach $(1,-1)$. Suppose the agent can observe, in addition to its current state, all successor states and the actions that would lead to them. Write an algorithm that is complete even for bidirected state spaces with infinite paths. What states does it visit in reaching $(1,-1)$?

Exercise 17

Relate the time complexity of LRTA* to its space complexity.

Exercise 1

Suppose you have an oracle, $OM(s)$, that correctly predicts the opponent’s move in any state. Using this, formulate the definition of a game as a (single-agent) search problem. Describe an algorithm for finding the optimal move.

Exercise 2

Consider the problem of solving two 8-puzzles.
1. Give a complete problem formulation in the style of Chapter search-chapter.
2. How large is the reachable state space? Give an exact numerical expression.
3. Suppose we make the problem adversarial as follows: the two players take turns moving; a coin is flipped to determine the puzzle on which to make a move in that turn; and the winner is the first to solve one puzzle. Which algorithm can be used to choose a move in this setting?
4. Does the game eventually end, given optimal play? Explain.

Figure: Pursuit-evasion game. (a) A map where the cost of every edge is 1. Initially the pursuer $P$ is at node b and the evader $E$ is at node d. (b) A partial game tree for this map. Each node is labeled with the $P, E$ positions. $P$ moves first. Branches marked “?” have yet to be explored.

Exercise 3

Imagine that, in Exercise two-friends-exercise, one of the friends wants to avoid the other. The problem then becomes a two-player game. We assume now that the players take turns moving. The game ends only when the players are on the same node; the terminal payoff to the pursuer is minus the total time taken. (The evader “wins” by never losing.) An example is shown in Figure pursuit-evasion-game-figure.
1. Copy the game tree and mark the values of the terminal nodes.
2.
Next to each internal node, write the strongest fact you can infer about its value (a number, one or more inequalities such as “$\geq 14$”, or a “?”).
3. Beneath each question mark, write the name of the node reached by that branch.
4. Explain how a bound on the value of the nodes in (c) can be derived from consideration of shortest-path lengths on the map, and derive such bounds for these nodes. Remember the cost to get to each leaf as well as the cost to solve it.
5. Now suppose that the tree as given, with the leaf bounds from (d), is evaluated from left to right. Circle those “?” nodes that would not need to be expanded further, given the bounds from part (d), and cross out those that need not be considered at all.
6. Can you prove anything in general about who wins the game on a map that is a tree?

Exercise 4 (game-playing-chance-exercise)

Describe and implement state descriptions, move generators, terminal tests, utility functions, and evaluation functions for one or more of the following stochastic games: Monopoly, Scrabble, bridge play with a given contract, or Texas hold’em poker.

Exercise 5

Describe and implement a real-time, multiplayer game-playing environment, where time is part of the environment state and players are given fixed time allocations.

Exercise 6

Discuss how well the standard approach to game playing would apply to games such as tennis, pool, and croquet, which take place in a continuous physical state space.

Exercise 7 (minimax-optimality-exercise)

Prove the following assertion: For every game tree, the utility obtained by max using minimax decisions against a suboptimal min will never be lower than the utility obtained playing against an optimal min. Can you come up with a game tree in which max can do still better using a suboptimal strategy against a suboptimal min?

Player $A$ moves first. The two players take turns moving, and each player must move his token to an open adjacent space in either direction.
If the opponent occupies an adjacent space, then a player may jump over the opponent to the next open space if any. (For example, if $A$ is on 3 and $B$ is on 2, then $A$ may move back to 1.) The game ends when one player reaches the opposite end of the board. If player $A$ reaches space 4 first, then the value of the game to $A$ is $+1$; if player $B$ reaches space 1 first, then the value of the game to $A$ is $-1$. The starting position of a simple game.

Exercise 8

Consider the two-player game described in Figure line-game4-figure.
1. Draw the complete game tree, using the following conventions:
   - Write each state as $(s_A, s_B)$, where $s_A$ and $s_B$ denote the token locations.
   - Put each terminal state in a square box and write its game value in a circle.
   - Put loop states (states that already appear on the path to the root) in double square boxes. Since their value is unclear, annotate each with a “?” in a circle.
2. Now mark each node with its backed-up minimax value (also in a circle). Explain how you handled the “?” values and why.
3. Explain why the standard minimax algorithm would fail on this game tree and briefly sketch how you might fix it, drawing on your answer to (b). Does your modified algorithm give optimal decisions for all games with loops?
4. This 4-square game can be generalized to $n$ squares for any $n > 2$. Prove that $A$ wins if $n$ is even and loses if $n$ is odd.

Exercise 9

This problem exercises the basic concepts of game playing, using tic-tac-toe (noughts and crosses) as an example. We define $X_n$ as the number of rows, columns, or diagonals with exactly $n$ $X$’s and no $O$’s. Similarly, $O_n$ is the number of rows, columns, or diagonals with just $n$ $O$’s. The utility function assigns $+1$ to any position with $X_3 = 1$ and $-1$ to any position with $O_3 = 1$. All other terminal positions have utility 0. For nonterminal positions, we use a linear evaluation function defined as $\mathit{Eval}(s) = 3X_2(s) + X_1(s) - (3O_2(s) + O_1(s))$.
1.
Approximately how many possible games of tic-tac-toe are there?
2. Show the whole game tree starting from an empty board down to depth 2 (i.e., one $X$ and one $O$ on the board), taking symmetry into account.
3. Mark on your tree the evaluations of all the positions at depth 2.
4. Using the minimax algorithm, mark on your tree the backed-up values for the positions at depths 1 and 0, and use those values to choose the best starting move.
5. Circle the nodes at depth 2 that would not be evaluated if alpha–beta pruning were applied, assuming the nodes are generated in the optimal order for alpha–beta pruning.

Exercise 10

Consider the family of generalized tic-tac-toe games, defined as follows. Each particular game is specified by a set $\mathcal{S}$ of squares and a collection $\mathcal{W}$ of winning positions. Each winning position is a subset of $\mathcal{S}$. For example, in standard tic-tac-toe, $\mathcal{S}$ is a set of 9 squares and $\mathcal{W}$ is a collection of 8 subsets of $\mathcal{S}$: the three rows, the three columns, and the two diagonals. In other respects, the game is identical to standard tic-tac-toe. Starting from an empty board, players alternate placing their marks on an empty square. A player who marks every square in a winning position wins the game. It is a tie if all squares are marked and neither player has won.
1. Let $N = |\mathcal{S}|$, the number of squares. Give an upper bound on the number of nodes in the complete game tree for generalized tic-tac-toe as a function of $N$.
2. Give a lower bound on the size of the game tree for the worst case, where $\mathcal{W} = \{\,\}$.
3. Propose a plausible evaluation function that can be used for any instance of generalized tic-tac-toe. The function may depend on $\mathcal{S}$ and $\mathcal{W}$.
4. Assume that it is possible to generate a new board and check whether it is a winning position in $100N$ machine instructions and assume a 2 gigahertz processor. Ignore memory limitations.
Using your estimate in (a), roughly how large a game tree can be completely solved by alpha–beta in a second of CPU time? A minute? An hour?

Exercise 11

Develop a general game-playing program, capable of playing a variety of games.
1. Implement move generators and evaluation functions for one or more of the following games: Kalah, Othello, checkers, and chess.
2. Construct a general alpha–beta game-playing agent.
3. Compare the effect of increasing search depth, improving move ordering, and improving the evaluation function. How close does your effective branching factor come to the ideal case of perfect move ordering?
4. Implement a selective search algorithm, such as B* Berliner:1979, conspiracy number search @McAllester:1988, or MGSS* Russell+Wefald:1989 and compare its performance to A*.

Exercise 12

Describe how the minimax and alpha–beta algorithms change for two-player, non-zero-sum games in which each player has a distinct utility function and both utility functions are known to both players. If there are no constraints on the two terminal utilities, is it possible for any node to be pruned by alpha–beta? What if the player’s utility functions on any state differ by at most a constant $k$, making the game almost cooperative?

Exercise 13

Describe how the minimax and alpha–beta algorithms change for two-player, non-zero-sum games in which each player has a distinct utility function and both utility functions are known to both players. If there are no constraints on the two terminal utilities, is it possible for any node to be pruned by alpha–beta? What if the player’s utility functions on any state sum to a number between constants $-k$ and $k$, making the game almost zero-sum?

Exercise 14

Develop a formal proof of correctness for alpha–beta pruning. To do this, consider the situation shown in Figure alpha-beta-proof-figure.
The question is whether to prune node $n_j$, which is a max-node and a descendant of node $n_1$. The basic idea is to prune it if and only if the minimax value of $n_1$ can be shown to be independent of the value of $n_j$.
1. Node $n_1$ takes on the minimum value among its children: $n_1 = \min(n_2, n_{21}, \ldots, n_{2b_2})$. Find a similar expression for $n_2$ and hence an expression for $n_1$ in terms of $n_j$.
2. Let $l_i$ be the minimum (or maximum) value of the nodes to the left of node $n_i$ at depth $i$, whose minimax value is already known. Similarly, let $r_i$ be the minimum (or maximum) value of the unexplored nodes to the right of $n_i$ at depth $i$. Rewrite your expression for $n_1$ in terms of the $l_i$ and $r_i$ values.
3. Now reformulate the expression to show that in order to affect $n_1$, $n_j$ must not exceed a certain bound derived from the $l_i$ values.
4. Repeat the process for the case where $n_j$ is a min-node.

Figure: Situation when considering whether to prune node $n_j$.

Exercise 15

Prove that the alpha–beta algorithm takes time $O(b^{m/2})$ with optimal move ordering, where $m$ is the maximum depth of the game tree.

Exercise 16

Suppose you have a chess program that can evaluate 5 million nodes per second. Decide on a compact representation of a game state for storage in a transposition table. About how many entries can you fit in a 1-gigabyte in-memory table? Will that be enough for the three minutes of search allocated for one move? How many table lookups can you do in the time it would take to do one evaluation? Now suppose the transposition table is stored on disk. About how many evaluations could you do in the time it takes to do one disk seek with standard disk hardware?

Exercise 17

Suppose you have a chess program that can evaluate 10 million nodes per second. Decide on a compact representation of a game state for storage in a transposition table. About how many entries can you fit in a 2-gigabyte in-memory table?
Will that be enough for the three minutes of search allocated for one move? How many table lookups can you do in the time it would take to do one evaluation? Now suppose the transposition table is stored on disk. About how many evaluations could you do in the time it takes to do one disk seek with standard disk hardware?

Figure: The complete game tree for a trivial game with chance nodes.

Exercise 18

This question considers pruning in games with chance nodes. Figure trivial-chance-game-figure shows the complete game tree for a trivial game. Assume that the leaf nodes are to be evaluated in left-to-right order, and that before a leaf node is evaluated, we know nothing about its value—the range of possible values is $-\infty$ to $\infty$.
1. Copy the figure, mark the value of all the internal nodes, and indicate the best move at the root with an arrow.
2. Given the values of the first six leaves, do we need to evaluate the seventh and eighth leaves? Given the values of the first seven leaves, do we need to evaluate the eighth leaf? Explain your answers.
3. Suppose the leaf node values are known to lie between $-2$ and $2$ inclusive. After the first two leaves are evaluated, what is the value range for the left-hand chance node?
4. Circle all the leaves that need not be evaluated under the assumption in (c).

Exercise 19

Implement the expectiminimax algorithm and the *-alpha–beta algorithm, which is described by Ballard:1983, for pruning game trees with chance nodes. Try them on a game such as backgammon and measure the pruning effectiveness of *-alpha–beta.

Exercise 20 (game-linear-transform)

Prove that with a positive linear transformation of leaf values (i.e., transforming a value $x$ to $ax + b$ where $a > 0$), the choice of move remains unchanged in a game tree, even when there are chance nodes.
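Exercise 20's claim can be checked empirically before proving it: applying $x \mapsto ax + b$ with $a > 0$ to every leaf leaves the root's move choice unchanged, whereas a negative $a$ can flip it. A small sketch (the tuple-based tree encoding is my own, purely illustrative):

```python
def expectimax(node):
    """node is ('leaf', value), ('max', [children]), or ('chance', [(p, child), ...])."""
    kind, body = node
    if kind == 'leaf':
        return body
    if kind == 'max':
        return max(expectimax(c) for c in body)
    return sum(p * expectimax(c) for p, c in body)  # chance node: expected value

def best_move(children):
    """Index of the root child with the highest expectimax value."""
    values = [expectimax(c) for c in children]
    return values.index(max(values))

def transform(node, a, b):
    """Apply x -> a*x + b to every leaf, leaving the tree shape unchanged."""
    kind, body = node
    if kind == 'leaf':
        return ('leaf', a * body + b)
    if kind == 'max':
        return ('max', [transform(c, a, b) for c in body])
    return ('chance', [(p, transform(c, a, b)) for p, c in body])
```

For example, with chance children of values 2.0 and 1.9, the first is preferred both before and after the transform $x \mapsto 3x + 5$, but $x \mapsto -x$ reverses the preference; the linearity of expectation is exactly what makes the positive case go through.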
Exercise 21 (game-playing-monte-carlo-exercise)

Consider the following procedure for choosing moves in games with chance nodes:
- Generate some dice-roll sequences (say, 50) down to a suitable depth (say, 8).
- With known dice rolls, the game tree becomes deterministic. For each dice-roll sequence, solve the resulting deterministic game tree using alpha–beta.
- Use the results to estimate the value of each move and to choose the best.
Will this procedure work well? Why (or why not)?

Exercise 22

In the following, a “max” tree consists only of max nodes, whereas an “expectimax” tree consists of a max node at the root with alternating layers of chance and max nodes. At chance nodes, all outcome probabilities are nonzero. The goal is to find the value of the root with a bounded-depth search. For each of (a)–(f), either give an example or explain why this is impossible.
1. Assuming that leaf values are finite but unbounded, is pruning (as in alpha–beta) ever possible in a max tree?
2. Is pruning ever possible in an expectimax tree under the same conditions?
3. If leaf values are all nonnegative, is pruning ever possible in a max tree? Give an example, or explain why not.
4. If leaf values are all nonnegative, is pruning ever possible in an expectimax tree? Give an example, or explain why not.
5. If leaf values are all in the range $[0,1]$, is pruning ever possible in a max tree? Give an example, or explain why not.
6. If leaf values are all in the range $[0,1]$, is pruning ever possible in an expectimax tree?
7. Consider the outcomes of a chance node in an expectimax tree. Which of the following evaluation orders is most likely to yield pruning opportunities?
   i. Lowest probability first
   ii. Highest probability first
   iii. Doesn’t make any difference

Exercise 23

In the following, a “max” tree consists only of max nodes, whereas an “expectimax” tree consists of a max node at the root with alternating layers of chance and max nodes. At chance nodes, all outcome probabilities are nonzero.
The goal is to find the value of the root with a bounded-depth search.
1. Assuming that leaf values are finite but unbounded, is pruning (as in alpha–beta) ever possible in a max tree? Give an example, or explain why not.
2. Is pruning ever possible in an expectimax tree under the same conditions? Give an example, or explain why not.
3. If leaf values are constrained to be in the range $[0,1]$, is pruning ever possible in a max tree? Give an example, or explain why not.
4. If leaf values are constrained to be in the range $[0,1]$, is pruning ever possible in an expectimax tree? Give an example (qualitatively different from your example in (e), if any), or explain why not.
5. If leaf values are constrained to be nonnegative, is pruning ever possible in a max tree? Give an example, or explain why not.
6. If leaf values are constrained to be nonnegative, is pruning ever possible in an expectimax tree? Give an example, or explain why not.
7. Consider the outcomes of a chance node in an expectimax tree. Which of the following evaluation orders is most likely to yield pruning opportunities: (i) Lowest probability first; (ii) Highest probability first; (iii) Doesn’t make any difference?

Exercise 24

Suppose you have an oracle, $OM(s)$, that correctly predicts the opponent’s move in any state. Using this, formulate the definition of a game as a (single-agent) search problem. Describe an algorithm for finding the optimal move.

Exercise 25

Consider carefully the interplay of chance events and partial information in each of the games in Exercise game-playing-chance-exercise.
1. For which is the standard expectiminimax model appropriate? Implement the algorithm and run it in your game-playing agent, with appropriate modifications to the game-playing environment.
2. For which would the scheme described in Exercise game-playing-monte-carlo-exercise be appropriate?
3.
Discuss how you might deal with the fact that in some of the games, the players do not have the same knowledge of the current state.

Exercise 1

How many solutions are there for the map-coloring problem in Figure australia-figure? How many solutions if four colors are allowed? Two colors?

Exercise 2

Consider the problem of placing $k$ knights on an $n \times n$ chessboard such that no two knights are attacking each other, where $k$ is given and $k \leq n^2$.
1. Choose a CSP formulation. In your formulation, what are the variables?
2. What are the possible values of each variable?
3. What sets of variables are constrained, and how?
4. Now consider the problem of putting *as many knights as possible* on the board without any attacks. Explain how to solve this with local search by defining appropriate ACTIONS and RESULT functions and a sensible objective function.

Exercise 3 (crossword-exercise)

Consider the problem of constructing (not solving) crossword puzzles: fitting words into a rectangular grid. The grid, which is given as part of the problem, specifies which squares are blank and which are shaded. Assume that a list of words (i.e., a dictionary) is provided and that the task is to fill in the blank squares by using any subset of the list. Formulate this problem precisely in two ways:
1. As a general search problem. Choose an appropriate search algorithm and specify a heuristic function. Is it better to fill in blanks one letter at a time or one word at a time?
2. As a constraint satisfaction problem. Should the variables be words or letters?
Which formulation do you think will be better? Why?

Exercise 4 (csp-definition-exercise)

Give precise formulations for each of the following as constraint satisfaction problems:
1. Rectilinear floor-planning: find non-overlapping places in a large rectangle for a number of smaller rectangles.
2. Class scheduling: There is a fixed number of professors and classrooms, a list of classes to be offered, and a list of possible time slots for classes.
Each professor has a set of classes that he or she can teach.
3. Hamiltonian tour: given a network of cities connected by roads, choose an order to visit all cities in a country without repeating any.

Exercise 5

Solve the cryptarithmetic problem in Figure cryptarithmetic-figure by hand, using the strategy of backtracking with forward checking and the MRV and least-constraining-value heuristics.

Exercise 6 (nary-csp-exercise)

Show how a single ternary constraint such as “$A + B = C$” can be turned into three binary constraints by using an auxiliary variable. You may assume finite domains. (*Hint:* Consider a new variable that takes on values that are pairs of other values, and consider constraints such as “$X$ is the first element of the pair $Y$.”) Next, show how constraints with more than three variables can be treated similarly. Finally, show how unary constraints can be eliminated by altering the domains of variables. This completes the demonstration that any CSP can be transformed into a CSP with only binary constraints.

Exercise 7 (zebra-exercise)

Consider the following logic puzzle: In five houses, each with a different color, live five persons of different nationalities, each of whom prefers a different brand of candy, a different drink, and a different pet.
Given the following facts, the questions to answer are “Where does the zebra live, and in which house do they drink water?”
- The Englishman lives in the red house.
- The Spaniard owns the dog.
- The Norwegian lives in the first house on the left.
- The green house is immediately to the right of the ivory house.
- The man who eats Hershey bars lives in the house next to the man with the fox.
- Kit Kats are eaten in the yellow house.
- The Norwegian lives next to the blue house.
- The Smarties eater owns snails.
- The Snickers eater drinks orange juice.
- The Ukrainian drinks tea.
- The Japanese eats Milky Ways.
- Kit Kats are eaten in a house next to the house where the horse is kept.
- Coffee is drunk in the green house.
- Milk is drunk in the middle house.
Discuss different representations of this problem as a CSP. Why would one prefer one representation over another?

Exercise 8

Consider the graph with 8 nodes $A_1$, $A_2$, $A_3$, $A_4$, $H$, $T$, $F_1$, $F_2$. $A_i$ is connected to $A_{i+1}$ for all $i$, each $A_i$ is connected to $H$, $H$ is connected to $T$, and $T$ is connected to each $F_i$. Find a 3-coloring of this graph by hand using the following strategy: backtracking with conflict-directed backjumping, the variable order $A_1$, $H$, $A_4$, $F_1$, $A_2$, $F_2$, $A_3$, $T$, and the value order $R$, $G$, $B$.

Exercise 9

Explain why it is a good heuristic to choose the variable that is *most* constrained but the value that is *least* constraining in a CSP search.

Exercise 10

Generate random instances of map-coloring problems as follows: scatter $n$ points on the unit square; select a point $X$ at random, connect $X$ by a straight line to the nearest point $Y$ such that $X$ is not already connected to $Y$ and the line crosses no other line; repeat the previous step until no more connections are possible. The points represent regions on the map and the lines connect neighbors. Now try to find
Now try to find $k$-colorings of each map, for both $k=3$ and $k=4$, using min-conflicts, backtracking, backtracking with forward checking, and backtracking with MAC. Construct a table of average run times for each algorithm for values of $n$ up to the largest you can manage. Comment on your results.

Exercise 11 Use the AC-3 algorithm to show that arc consistency can detect the inconsistency of the partial assignment $\{WA = green, V = red\}$ for the problem shown in Figure australia-figure.

Exercise 12 Use the AC-3 algorithm to show that arc consistency can detect the inconsistency of the partial assignment $\{WA = red, V = blue\}$ for the problem shown in Figure australia-figure.

Exercise 13 What is the worst-case complexity of running AC-3 on a tree-structured CSP?

Exercise 14 (ac4-exercise) AC-3 puts back on the queue *every* arc ($X_{k}, X_{i}$) whenever *any* value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of $X_{i}$ that are consistent with each value of $X_{k}$. Explain how to update these numbers efficiently and hence show that arc consistency can be enforced in total time $O(n^2d^2)$.

Exercise 15 The Tree-CSP-Solver (Figure tree-csp-figure) makes arcs consistent starting at the leaves and working backwards towards the root. Why does it do that? What would happen if it went in the opposite direction?

Exercise 16 We introduced Sudoku as a CSP to be solved by search over partial assignments because that is the way people generally undertake solving Sudoku problems. It is also possible, of course, to attack these problems with local search over complete assignments. How well would a local solver using the min-conflicts heuristic do on Sudoku problems?

Exercise 17 Define in your own words the terms constraint, backtracking search, arc consistency, backjumping, min-conflicts, and cycle cutset.
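Exercises 11–14 above trace AC-3 by hand on the Australia map-coloring problem. As a cross-check, here is a minimal AC-3 sketch for that CSP, assuming binary "different color" constraints between neighboring regions. The region adjacency follows Figure australia-figure; the function names and data layout are illustrative, not taken from the AIMA code repository.

```python
from collections import deque

def ac3(domains, neighbors):
    """Enforce arc consistency for binary 'different color' constraints.
    Mutates domains; returns False if some domain is wiped out, else True."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        # Revise: remove values of xi that have no consistent value in xj.
        revised = False
        for v in list(domains[xi]):
            if not any(v != w for w in domains[xj]):
                domains[xi].remove(v)
                revised = True
        if revised:
            if not domains[xi]:
                return False          # empty domain: inconsistency detected
            for xk in neighbors[xi]:  # re-examine arcs pointing at xi
                if xk != xj:
                    queue.append((xk, xi))
    return True

# Australia map-coloring, with the partial assignment WA=green, V=red
# from Exercise 11 encoded as singleton domains.
neighbors = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
domains = {r: {'red', 'green', 'blue'} for r in neighbors}
domains['WA'], domains['V'] = {'green'}, {'red'}
print(ac3(domains, neighbors))  # False: AC-3 detects the inconsistency
```

Propagation runs WA=green and V=red down to SA={blue}, then NT={red}, Q={green}, and finally empties NSW's domain, mirroring the hand trace the exercise asks for.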
Exercise 18 Define in your own words the terms constraint, commutativity, arc consistency, backjumping, min-conflicts, and cycle cutset.

Exercise 19 Suppose that a graph is known to have a cycle cutset of no more than $k$ nodes. Describe a simple algorithm for finding a minimal cycle cutset whose run time is not much more than $O(n^k)$ for a CSP with $n$ variables. Search the literature for methods for finding approximately minimal cycle cutsets in time that is polynomial in the size of the cutset. Does the existence of such algorithms make the cycle cutset method practical?

Exercise 20 Consider the problem of tiling a surface (completely and exactly covering it) with $n$ dominoes ($2 \times 1$ rectangles). The surface is an arbitrary edge-connected (i.e., adjacent along an edge, not just a corner) collection of $2n$ $1 \times 1$ squares (e.g., a checkerboard, a checkerboard with some squares missing, a $10 \times 1$ row of squares, etc.).
1. Formulate this problem precisely as a CSP where the dominoes are the variables.
2. Formulate this problem precisely as a CSP where the squares are the variables, keeping the state space as small as possible. (*Hint:* does it matter which particular domino goes on a given pair of squares?)
3. Construct a surface consisting of 6 squares such that your CSP formulation from part (b) has a *tree-structured* constraint graph.
4. Describe exactly the set of solvable instances that have a tree-structured constraint graph.

Exercise 1 Suppose the agent has progressed to the point shown in Figure wumpus-seq35-figure(a), page wumpus-seq35-figure, having perceived nothing in [1,1], a breeze in [2,1], and a stench in [1,2], and is now concerned with the contents of [1,3], [2,2], and [3,1]. Each of these can contain a pit, and at most one can contain a wumpus. Following the example of Figure wumpus-entailment-figure, construct the set of possible worlds. (You should find 32 of them.)
Mark the worlds in whichthe KB is true and those in which each of the following sentences istrue:$alpha_2$ = “There is no pit in [2,2].”$alpha_3$ = “There is a wumpus in [1,3].”Hence show that ${KB} {models}alpha_2$ and${KB} {models}alpha_3$. Exercise 2 (Adapted from Barwise+Etchemendy:1993 .) Given the following, can you prove that the unicorn ismythical? How about magical? Horned?Note: If the unicorn is mythical, then it is immortal, but if it is not mythical, then it is a mortal mammal. If the unicorn is either immortal or a mammal, then it is horned. The unicorn is magical if it is horned. Exercise 3 (truth-value-exercise) Consider the problem of deciding whether apropositional logic sentence is true in a given model.1. Write a recursive algorithm PL-True?$ (s, m )$ that returns ${true}$ if and only if the sentence $s$ is true in the model $m$ (where $m$ assigns a truth value for every symbol in $s$). The algorithm should run in time linear in the size of the sentence. (Alternatively, use a version of this function from the online code repository.)2. Give three examples of sentences that can be determined to be true or false in a partial model that does not specify a truth value for some of the symbols.3. Show that the truth value (if any) of a sentence in a partial model cannot be determined efficiently in general.4. Modify your algorithm so that it can sometimes judge truth from partial models, while retaining its recursive structure and linear run time. Give three examples of sentences whose truth in a partial model is not detected by your algorithm.5. Investigate whether the modified algorithm makes $TT-Entails?$ more efficient. Exercise 4 Which of the following are correct?1. ${False} models {True}$.2. ${True} models {False}$.3. $(Aland B) models (A{;;{Leftrightarrow};;}B)$.4. $A{;;{Leftrightarrow};;}B models A lor B$.5. $A{;;{Leftrightarrow};;}B models lnot A lor B$.6. 
$(Aland B){:;{Rightarrow}:;}C models (A{:;{Rightarrow}:;}C)lor(B{:;{Rightarrow}:;}C)$.7. $(Clor (lnot A land lnot B)) equiv ((A{:;{Rightarrow}:;}C) land (B {:;{Rightarrow}:;}C))$.8. $(Alor B) land (lnot Clorlnot Dlor E) models (Alor B)$.9. $(Alor B) land (lnot Clorlnot Dlor E) models (Alor B) land (lnot Dlor E)$.10. $(Alor B) land lnot(A {:;{Rightarrow}:;}B)$ is satisfiable.11. $(A{;;{Leftrightarrow};;}B) land (lnot A lor B)$ is satisfiable.12. $(A{;;{Leftrightarrow};;}B) {;;{Leftrightarrow};;}C$ has the same number of models as $(A{;;{Leftrightarrow};;}B)$ for any fixed set of proposition symbols that includes $A$, $B$, $C$. Exercise 5 Which of the following are correct?1. ${False} models {True}$.2. ${True} models {False}$.3. $(Aland B) models (A{;;{Leftrightarrow};;}B)$.4. $A{;;{Leftrightarrow};;}B models A lor B$.5. $A{;;{Leftrightarrow};;}B models lnot A lor B$.6. $(Alor B) land (lnot Clorlnot Dlor E) models (Alor Blor C) land (Bland Cland D{:;{Rightarrow}:;}E)$.7. $(Alor B) land (lnot Clorlnot Dlor E) models (Alor B) land (lnot Dlor E)$.8. $(Alor B) land lnot(A {:;{Rightarrow}:;}B)$ is satisfiable.9. $(Aland B){:;{Rightarrow}:;}C models (A{:;{Rightarrow}:;}C)lor(B{:;{Rightarrow}:;}C)$.10. $(Clor (lnot A land lnot B)) equiv ((A{:;{Rightarrow}:;}C) land (B {:;{Rightarrow}:;}C))$.11. $(A{;;{Leftrightarrow};;}B) land (lnot A lor B)$ is satisfiable.12. $(A{;;{Leftrightarrow};;}B) {;;{Leftrightarrow};;}C$ has the same number of models as $(A{;;{Leftrightarrow};;}B)$ for any fixed set of proposition symbols that includes $A$, $B$, $C$. Exercise 6 (deduction-theorem-exercise) Prove each of the following assertions:1. $alpha$ is valid if and only if ${True}{models}alpha$.2. For any $alpha$, ${False}{models}alpha$.3. $alpha{models}beta$ if and only if the sentence $(alpha {:;{Rightarrow}:;}beta)$ is valid.4. $alpha equiv beta$ if and only if the sentence $(alpha{;;{Leftrightarrow};;}beta)$ is valid.5. 
$alpha{models}beta$ if and only if the sentence $(alpha land lnot beta)$ is unsatisfiable. Exercise 7 Prove, or find a counterexample to, each of the following assertions:1. If $alphamodelsgamma$ or $betamodelsgamma$ (or both) then $(alphaland beta)modelsgamma$2. If $(alphaland beta)modelsgamma$ then $alphamodelsgamma$ or $betamodelsgamma$ (or both).3. If $alphamodels (beta lor gamma)$ then $alpha models beta$ or $alpha models gamma$ (or both). Exercise 8 Prove, or find a counterexample to, each of the following assertions:1. If $alphamodelsgamma$ or $betamodelsgamma$ (or both) then $(alphaland beta)modelsgamma$2. If $alphamodels (beta land gamma)$ then $alpha models beta$ and $alpha models gamma$.3. If $alphamodels (beta lor gamma)$ then $alpha models beta$ or $alpha models gamma$ (or both). Exercise 9 Consider a vocabulary with only four propositions, $A$, $B$, $C$, and$D$. How many models are there for the following sentences?1. $Blor C$.2. $lnot Alor lnot B lor lnot C lor lnot D$.3. $(A{:;{Rightarrow}:;}B) land A land lnot B land C land D$. Exercise 10 We have defined four binary logical connectives.1. Are there any others that might be useful?2. How many binary connectives can there be?3. Why are some of them not very useful? Exercise 11 (logical-equivalence-exercise) Using a method of your choice, verifyeach of the equivalences inTable logical-equivalence-table (page logical-equivalence-table). Exercise 12 (propositional-validity-exercise) Decide whether each of the followingsentences is valid, unsatisfiable, or neither. Verify your decisionsusing truth tables or the equivalence rules ofTable logical-equivalence-table (page logical-equivalence-table).1. ${Smoke} {:;{Rightarrow}:;}{Smoke}$2. ${Smoke} {:;{Rightarrow}:;}{Fire}$3. $({Smoke} {:;{Rightarrow}:;}{Fire}) {:;{Rightarrow}:;}(lnot {Smoke} {:;{Rightarrow}:;}lnot {Fire})$4. ${Smoke} lor {Fire} lor lnot {Fire}$5. 
$(({Smoke} land {Heat}) {:;{Rightarrow}:;}{Fire}) {;;{Leftrightarrow};;}(({Smoke} {:;{Rightarrow}:;}{Fire}) lor ({Heat} {:;{Rightarrow}:;}{Fire}))$6. $({Smoke} {:;{Rightarrow}:;}{Fire}) {:;{Rightarrow}:;}(({Smoke} land {Heat}) {:;{Rightarrow}:;}{Fire}) $7. ${Big} lor {Dumb} lor ({Big} {:;{Rightarrow}:;}{Dumb})$ Exercise 13 (propositional-validity-exercise) Decide whether each of the followingsentences is valid, unsatisfiable, or neither. Verify your decisionsusing truth tables or the equivalence rules ofTable logical-equivalence-table (page logical-equivalence-table).1. ${Smoke} {:;{Rightarrow}:;}{Smoke}$2. ${Smoke} {:;{Rightarrow}:;}{Fire}$3. $({Smoke} {:;{Rightarrow}:;}{Fire}) {:;{Rightarrow}:;}(lnot {Smoke} {:;{Rightarrow}:;}lnot {Fire})$4. ${Smoke} lor {Fire} lor lnot {Fire}$5. $(({Smoke} land {Heat}) {:;{Rightarrow}:;}{Fire}) {;;{Leftrightarrow};;}(({Smoke} {:;{Rightarrow}:;}{Fire}) lor ({Heat} {:;{Rightarrow}:;}{Fire}))$6. ${Big} lor {Dumb} lor ({Big} {:;{Rightarrow}:;}{Dumb})$7. $({Big} land {Dumb}) lor lnot {Dumb}$ Exercise 14 (cnf-proof-exercise) Any propositional logic sentence is logicallyequivalent to the assertion that each possible world in which it wouldbe false is not the case. From this observation, prove that any sentencecan be written in CNF. Exercise 15 Use resolution to prove the sentence $lnot A land lnot B$ from theclauses in Exercise convert-clausal-exercise. Exercise 16 (inf-exercise) This exercise looks into the relationship betweenclauses and implication sentences.1. Show that the clause $(lnot P_1 lor cdots lor lnot P_m lor Q)$ is logically equivalent to the implication sentence $(P_1 land cdots land P_m) {;{Rightarrow};}Q$.2. Show that every clause (regardless of the number of positive literals) can be written in the form $(P_1 land cdots land P_m) {;{Rightarrow};}(Q_1 lor cdots lor Q_n)$, where the $P$s and $Q$s are proposition symbols. 
A knowledge base consisting of such sentences is in implicative normal form or Kowalski form Kowalski:1979.
3. Write down the full resolution rule for sentences in implicative normal form.

Exercise 17 According to some political pundits, a person who is radical ($R$) is electable ($E$) if he/she is conservative ($C$), but otherwise is not electable.
1. Which of the following are correct representations of this assertion?
   1. $(R \land E) \iff C$
   2. $R \Rightarrow (E \iff C)$
   3. $R \Rightarrow ((C \Rightarrow E) \lor \lnot E)$
2. Which of the sentences in (a) can be expressed in Horn form?

Exercise 18 This question considers representing satisfiability (SAT) problems as CSPs.
1. Draw the constraint graph corresponding to the SAT problem $$(\lnot X_1 \lor X_2) \land (\lnot X_2 \lor X_3) \land \ldots \land (\lnot X_{n-1} \lor X_n)$$ for the particular case $n=5$.
2. How many solutions are there for this general SAT problem as a function of $n$?
3. Suppose we apply {Backtracking-Search} (page backtracking-search-algorithm) to find all solutions to a SAT CSP of the type given in (a). (To find all solutions to a CSP, we simply modify the basic algorithm so it continues searching after each solution is found.) Assume that variables are ordered $X_1,\ldots,X_n$ and ${false}$ is ordered before ${true}$. How much time will the algorithm take to terminate? (Write an $O(\cdot)$ expression as a function of $n$.)
4. We know that SAT problems in Horn form can be solved in linear time by forward chaining (unit propagation). We also know that every tree-structured binary CSP with discrete, finite domains can be solved in time linear in the number of variables (Section csp-structure-section). Are these two facts connected? Discuss.

Exercise 19 This question considers representing satisfiability (SAT) problems as CSPs.
1. Draw the constraint graph corresponding to the SAT problem $$(\lnot X_1 \lor X_2) \land (\lnot X_2 \lor X_3) \land \ldots \land (\lnot X_{n-1} \lor X_n)$$ for the particular case $n=4$.
2. How many solutions are there for this general SAT problem as a function of $n$?
3. Suppose we apply {Backtracking-Search} (page backtracking-search-algorithm) to find all solutions to a SAT CSP of the type given in (a). (To find all solutions to a CSP, we simply modify the basic algorithm so it continues searching after each solution is found.) Assume that variables are ordered $X_1,\ldots,X_n$ and ${false}$ is ordered before ${true}$. How much time will the algorithm take to terminate? (Write an $O(\cdot)$ expression as a function of $n$.)
4. We know that SAT problems in Horn form can be solved in linear time by forward chaining (unit propagation). We also know that every tree-structured binary CSP with discrete, finite domains can be solved in time linear in the number of variables (Section csp-structure-section). Are these two facts connected? Discuss.

Exercise 20 Explain why every nonempty propositional clause, by itself, is satisfiable. Prove rigorously that every set of five 3-SAT clauses is satisfiable, provided that each clause mentions exactly three distinct variables. What is the smallest set of such clauses that is unsatisfiable? Construct such a set.

Exercise 21 A propositional 2-CNF expression is a conjunction of clauses, each containing exactly 2 literals, e.g., $$(A \lor B) \land (\lnot A \lor C) \land (\lnot B \lor D) \land (\lnot C \lor G) \land (\lnot D \lor G) .$$
1. Prove using resolution that the above sentence entails $G$.
2. Two clauses are semantically distinct if they are not logically equivalent. How many semantically distinct 2-CNF clauses can be constructed from $n$ proposition symbols?
3. Using your answer to (b), prove that propositional resolution always terminates in time polynomial in $n$ given a 2-CNF sentence containing no more than $n$ distinct symbols.
4. Explain why your argument in (c) does not apply to 3-CNF.

Exercise 22 Prove each of the following assertions:
1. Every pair of propositional clauses either has no resolvents, or all their resolvents are logically equivalent.
2. There is no clause that, when resolved with itself, yields (after factoring) the clause $(\lnot P \lor \lnot Q)$.
3. If a propositional clause $C$ can be resolved with a copy of itself, it must be logically equivalent to $True$.

Exercise 23 Consider the following sentence: $$[ ({Food} \Rightarrow {Party}) \lor ({Drinks} \Rightarrow {Party}) ] \Rightarrow [ ( {Food} \land {Drinks} ) \Rightarrow {Party}] .$$
1. Determine, using enumeration, whether this sentence is valid, satisfiable (but not valid), or unsatisfiable.
2. Convert the left-hand and right-hand sides of the main implication into CNF, showing each step, and explain how the results confirm your answer to (a).
3. Prove your answer to (a) using resolution.

Exercise 24 (dnf-exercise) A sentence is in disjunctive normal form (DNF) if it is the disjunction of conjunctions of literals. For example, the sentence $(A \land B \land \lnot C) \lor (\lnot A \land C) \lor (B \land \lnot C)$ is in DNF.
1. Any propositional logic sentence is logically equivalent to the assertion that some possible world in which it would be true is in fact the case. From this observation, prove that any sentence can be written in DNF.
2. Construct an algorithm that converts any sentence in propositional logic into DNF. (Hint: The algorithm is similar to the algorithm for conversion to CNF given in Section pl-resolution-section.)
3. Construct a simple algorithm that takes as input a sentence in DNF and returns a satisfying assignment if one exists, or reports that no satisfying assignment exists.
4. Apply the algorithms in (b) and (c) to the following set of sentences: $A \Rightarrow B$, $B \Rightarrow C$, $C \Rightarrow A$.
5.
Since the algorithm in (b) is very similar to the algorithm for conversion to CNF, and since the algorithm in (c) is much simpler than any algorithm for solving a set of sentences in CNF, why is this technique not used in automated reasoning? Exercise 25 (convert-clausal-exercise) Convert the following set of sentences toclausal form.1. S1: $A {;;{Leftrightarrow};;}(B lor E)$.2. S2: $E {:;{Rightarrow}:;}D$.3. S3: $C land F {:;{Rightarrow}:;}lnot B$.4. S4: $E {:;{Rightarrow}:;}B$.5. S5: $B {:;{Rightarrow}:;}F$.6. S6: $B {:;{Rightarrow}:;}C$Give a trace of the execution of DPLL on the conjunction of theseclauses. Exercise 26 (convert-clausal-exercise) Convert the following set of sentences toclausal form.1. S1: $A {;;{Leftrightarrow};;}(B lor E)$.2. S2: $E {:;{Rightarrow}:;}D$.3. S3: $C land F {:;{Rightarrow}:;}lnot B$.4. S4: $E {:;{Rightarrow}:;}B$.5. S5: $B {:;{Rightarrow}:;}F$.6. S6: $B {:;{Rightarrow}:;}C$Give a trace of the execution of DPLL on the conjunction of theseclauses. Exercise 27 Is a randomly generated 4-CNF sentence with $n$ symbols and $m$ clausesmore or less likely to be solvable than a randomly generated 3-CNFsentence with $n$ symbols and $m$ clauses? Explain. Exercise 28 Minesweeper, the well-known computer game, isclosely related to the wumpus world. A minesweeper world isa rectangular grid of $N$ squares with $M$ invisible mines scatteredamong them. Any square may be probed by the agent; instant death followsif a mine is probed. Minesweeper indicates the presence of mines byrevealing, in each probed square, the number of minesthat are directly or diagonally adjacent. The goal is to probe everyunmined square.1. Let $X_{i,j}$ be true iff square $[i,j]$ contains a mine. Write down the assertion that exactly two mines are adjacent to [1,1] as a sentence involving some logical combination of $X_{i,j}$ propositions.2. Generalize your assertion from (a) by explaining how to construct a CNF sentence asserting that $k$ of $n$ neighbors contain mines.3. 
Explain precisely how an agent can use {DPLL} to prove that a given square does (or does not) contain a mine, ignoring the global constraint that there are exactly $M$ mines in all.4. Suppose that the global constraint is constructed from your method from part (b). How does the number of clauses depend on $M$ and $N$? Suggest a way to modify {DPLL} so that the global constraint does not need to be represented explicitly.5. Are any conclusions derived by the method in part (c) invalidated when the global constraint is taken into account?6. Give examples of configurations of probe values that induce long-range dependencies such that the contents of a given unprobed square would give information about the contents of a far-distant square. (Hint: consider an $Ntimes 1$ board.) Exercise 29 (known-literal-exercise) How long does it take to prove${KB}{models}alpha$ using {DPLL} when $alpha$ is a literal alreadycontained in ${KB}$? Explain. Exercise 30 (dpll-fc-exercise) Trace the behavior of {DPLL} on the knowledge base inFigure pl-horn-example-figure when trying to prove $Q$,and compare this behavior with that of the forward-chaining algorithm. Exercise 31 Write a successor-state axiom for the ${Locked}$ predicate, whichapplies to doors, assuming the only actions available are ${Lock}$ and${Unlock}$. Exercise 32 Discuss what is meant by optimal behavior in the wumpusworld. Show that the {Hybrid-Wumpus-Agent} is not optimal, and suggest ways to improve it. Exercise 33 Suppose an agent inhabits a world with two states, $S$ and $lnot S$,and can do exactly one of two actions, $a$ and $b$. Action $a$ doesnothing and action $b$ flips from one state to the other. Let $S^t$ bethe proposition that the agent is in state $S$ at time $t$, and let$a^t$ be the proposition that the agent does action $a$ at time $t$(similarly for $b^t$).1. Write a successor-state axiom for $S^{t+1}$.2. Convert the sentence in (a) into CNF.3. 
Show a resolution refutation proof that if the agent is in $lnot S$ at time $t$ and does $a$, it will still be in $lnot S$ at time $t+1$. Exercise 34 (ss-axiom-exercise) Section successor-state-sectionprovides some of the successor-state axioms required for the wumpusworld. Write down axioms for all remaining fluent symbols. Exercise 35 (hybrid-wumpus-exercise) Modify the {Hybrid-Wumpus-Agent} to use the 1-CNF logical stateestimation method described on page 1cnf-belief-state-page. We noted on that pagethat such an agent will not be able to acquire, maintain, and use morecomplex beliefs such as the disjunction $P_{3,1}lor P_{2,2}$. Suggest amethod for overcoming this problem by defining additional propositionsymbols, and try it out in the wumpus world. Does it improve theperformance of the agent? Exercise 1 A logical knowledge base represents the world using a set of sentenceswith no explicit structure. An analogicalrepresentation, on the other hand, has physical structure thatcorresponds directly to the structure of the thing represented. Considera road map of your country as an analogical representation of factsabout the country—it represents facts with a map language. Thetwo-dimensional structure of the map corresponds to the two-dimensionalsurface of the area.1. Give five examples of *symbols* in the map language.2. An *explicit* sentence is a sentence that the creator of the representation actually writes down. An *implicit* sentence is a sentence that results from explicit sentences because of properties of the analogical representation. Give three examples each of *implicit* and *explicit* sentences in the map language.3. Give three examples of facts about the physical structure of your country that cannot be represented in the map language.4. Give two examples of facts that are much easier to express in the map language than in first-order logic.5. Give two other examples of useful analogical representations. 
What are the advantages and disadvantages of each of these languages?

Exercise 2 Consider a knowledge base containing just two sentences: $P(a)$ and $P(b)$. Does this knowledge base entail $\forall x\; P(x)$? Explain your answer in terms of models.

Exercise 3 Is the sentence $\exists x,y \;\; x = y$ valid? Explain.

Exercise 4 Write down a logical sentence such that every world in which it is true contains exactly one object.

Exercise 5 (two-friends-exercise) Write down a logical sentence such that every world in which it is true contains exactly two objects.

Exercise 6 (8puzzle-parity-exercise) Consider a symbol vocabulary that contains $c$ constant symbols, $p_k$ predicate symbols of each arity $k$, and $f_k$ function symbols of each arity $k$, where $1 \leq k \leq A$. Let the domain size be fixed at $D$. For any given model, each predicate or function symbol is mapped onto a relation or function, respectively, of the same arity. You may assume that the functions in the model allow some input tuples to have no value for the function (i.e., the value is the invisible object). Derive a formula for the number of possible models for a domain with $D$ elements. Don’t worry about eliminating redundant combinations.

Exercise 7 (nqueens-size-exercise) Which of the following are valid (necessarily true) sentences?
1. $(\exists x\; x = x) \Rightarrow (\forall y\; \exists z\; y = z)$.
2. $\forall x\; P(x) \lor \lnot P(x)$.
3. $\forall x\; {Smart}(x) \lor (x = x)$.

Exercise 8 (empty-universe-exercise) Consider a version of the semantics for first-order logic in which models with empty domains are allowed. Give at least two examples of sentences that are valid according to the standard semantics but not according to the new semantics. Discuss which outcome makes more intuitive sense for your examples.

Exercise 9 (hillary-exercise) Does the fact $\lnot {Spouse}({George},{Laura})$ follow from the facts ${Jim} \neq {George}$ and ${Spouse}({Jim},{Laura})$? If so, give a proof; if not, supply additional axioms as needed.
What happensif we use ${Spouse}$ as a unary function symbol instead of a binarypredicate? Exercise 10 This exercise uses the function ${MapColor}$ and predicates${In}(x,y)$, ${Borders}(x,y)$, and ${Country}(x)$, whose argumentsare geographical regions, along with constant symbols for variousregions. In each of the following we give an English sentence and anumber of candidate logical expressions. For each of the logicalexpressions, state whether it (1) correctly expresses the Englishsentence; (2) is syntactically invalid and therefore meaningless; or (3)is syntactically valid but does not express the meaning of the Englishsentence.1. Paris and Marseilles are both in France. 1. ${In}({Paris} land {Marseilles}, {France})$. 2. ${In}({Paris},{France}) land {In}({Marseilles},{France})$. 3. ${In}({Paris},{France}) lor {In}({Marseilles},{France})$.2. There is a country that borders both Iraq and Pakistan. 1. ${exists,c;;}$ ${Country}(c) land {Border}(c,{Iraq}) land {Border}(c,{Pakistan})$. 2. ${exists,c;;}$ ${Country}(c) {:;{Rightarrow}:;}[{Border}(c,{Iraq}) land {Border}(c,{Pakistan})]$. 3. $[{exists,c;;}$ ${Country}(c)] {:;{Rightarrow}:;}[{Border}(c,{Iraq}) land {Border}(c,{Pakistan})]$. 4. ${exists,c;;}$ ${Border}({Country}(c),{Iraq} land {Pakistan})$.3. All countries that border Ecuador are in South America. 1. ${forall,c;;} Country(c) land {Border}(c,{Ecuador}) {:;{Rightarrow}:;}{In}(c,{SouthAmerica})$. 2. ${forall,c;;} {Country}(c) {:;{Rightarrow}:;}[{Border}(c,{Ecuador}) {:;{Rightarrow}:;}{In}(c,{SouthAmerica})]$. 3. ${forall,c;;} [{Country}(c) {:;{Rightarrow}:;}{Border}(c,{Ecuador})] {:;{Rightarrow}:;}{In}(c,{SouthAmerica})$. 4. ${forall,c;;} Country(c) land {Border}(c,{Ecuador}) land {In}(c,{SouthAmerica})$.4. No region in South America borders any region in Europe. 1. $lnot [{exists,c,d;;} {In}(c,{SouthAmerica}) land {In}(d,{Europe}) land {Borders}(c,d)]$. 2. ${forall,c,d;;} [{In}(c,{SouthAmerica}) land {In}(d,{Europe})] {:;{Rightarrow}:;}lnot {Borders}(c,d)]$. 
3. $lnot {forall,c;;} {In}(c,{SouthAmerica}) {:;{Rightarrow}:;}{exists,d;;} {In}(d,{Europe}) land lnot {Borders}(c,d)$. 4. ${forall,c;;} {In}(c,{SouthAmerica}) {:;{Rightarrow}:;}{forall,d;;} {In}(d,{Europe}) {:;{Rightarrow}:;}lnot {Borders}(c,d)$.5. No two adjacent countries have the same map color. 1. ${forall,x,y;;} lnot {Country}(x) lor lnot {Country}(y) lor lnot {Borders}(x,y) lor {}$ $lnot ({MapColor}(x) = {MapColor}(y))$. 2. ${forall,x,y;;} ({Country}(x) land {Country}(y) land {Borders}(x,y) land lnot(x=y)) {:;{Rightarrow}:;}{}$ $lnot ({MapColor}(x) = {MapColor}(y))$. 3. ${forall,x,y;;} {Country}(x) land {Country}(y) land {Borders}(x,y) land {}$ $lnot ({MapColor}(x) = {MapColor}(y))$. 4. ${forall,x,y;;} ({Country}(x) land {Country}(y) land {Borders}(x,y) ) {:;{Rightarrow}:;}{MapColor}(xneq y)$. Exercise 11 Consider a vocabulary with the following symbols:&gt; ${Occupation}(p,o)$: Predicate. Person $p$ has occupation $o$.&gt; ${Customer}(p1,p2)$: Predicate. Person $p1$ is a customer of person $p2$.&gt; ${Boss}(p1,p2)$: Predicate. Person $p1$ is a boss of person $p2$.&gt; ${Doctor}$, $ {Surgeon}$, $ {Lawyer}$, $ {Actor}$: Constants denoting occupations.&gt; ${Emily}$, $ {Joe}$: Constants denoting people.Use these symbols to write the following assertions in first-orderlogic:1. Emily is either a surgeon or a lawyer.2. Joe is an actor, but he also holds another job.3. All surgeons are doctors.4. Joe does not have a lawyer (i.e., is not a customer of any lawyer).5. Emily has a boss who is a lawyer.6. There exists a lawyer all of whose customers are doctors.7. Every surgeon has a lawyer. Exercise 12 In each of the following we give an English sentence and a number ofcandidate logical expressions. For each of the logical expressions,state whether it (1) correctly expresses the English sentence; (2) issyntactically invalid and therefore meaningless; or (3) is syntacticallyvalid but does not express the meaning of the English sentence.1. 
Every cat loves its mother or father. 1. ${forall,x;;} {Cat}(x) {:;{Rightarrow}:;}{Loves}(x,{Mother}(x)lor {Father}(x))$. 2. ${forall,x;;} lnot {Cat}(x) lor {Loves}(x,{Mother}(x)) lor {Loves}(x,{Father}(x))$. 3. ${forall,x;;} {Cat}(x) land ({Loves}(x,{Mother}(x))lor {Loves}(x,{Father}(x)))$.2. Every dog who loves one of its brothers is happy. 1. ${forall,x;;} {Dog}(x) land (exists y {Brother}(y,x) land {Loves}(x,y)) {:;{Rightarrow}:;}{Happy}(x)$. 2. ${forall,x,y;;} {Dog}(x) land {Brother}(y,x) land {Loves}(x,y) {:;{Rightarrow}:;}{Happy}(x)$. 3. ${forall,x;;} {Dog}(x) land [{forall,y;;} {Brother}(y,x) {;;{Leftrightarrow};;}{Loves}(x,y)] {:;{Rightarrow}:;}{Happy}(x)$.3. No dog bites a child of its owner. 1. ${forall,x;;} {Dog}(x) {:;{Rightarrow}:;}lnot {Bites}(x,{Child}({Owner}(x)))$. 2. $lnot {exists,x,y;;} {Dog}(x) land {Child}(y,{Owner}(x)) land {Bites}(x,y)$. 3. ${forall,x;;} {Dog}(x) {:;{Rightarrow}:;}({forall,y;;} {Child}(y,{Owner}(x)) {:;{Rightarrow}:;}lnot {Bites}(x,y))$. 4. $lnot {exists,x;;} {Dog}(x) {:;{Rightarrow}:;}({exists,y;;} {Child}(y,{Owner}(x)) land {Bites}(x,y))$.4. Everyone’s zip code within a state has the same first digit. 1. ${forall,x,s,z_1;;} [{State}(s) land {LivesIn}(x,s) land {Zip}(x)z_1] {:;{Rightarrow}:;}{}$ $[{forall,y,z_2;;} {LivesIn}(y,s) land {Zip}(y)z_2 {:;{Rightarrow}:;}{Digit}(1,z_1) {Digit}(1,z_2) ]$. 2. ${forall,x,s;;} [{State}(s) land {LivesIn}(x,s) land {exists,z_1;;} {Zip}(x)z_1] {:;{Rightarrow}:;}{}$ $ [{forall,y,z_2;;} {LivesIn}(y,s) land {Zip}(y)z_2 land {Digit}(1,z_1) {Digit}(1,z_2) ]$. 3. ${forall,x,y,s;;} {State}(s) land {LivesIn}(x,s) land {LivesIn}(y,s) {:;{Rightarrow}:;}{Digit}(1,{Zip}(x){Zip}(y))$. 4. ${forall,x,y,s;;} {State}(s) land {LivesIn}(x,s) land {LivesIn}(y,s) {:;{Rightarrow}:;}{}$ ${Digit}(1,{Zip}(x)) {Digit}(1,{Zip}(y))$. Exercise 13 (language-determination-exercise) Complete the following exercisesabout logical sentences:1. 
Translate into *good, natural* English (no $x$s or $y$s!): $$\forall x,y,l\;\; SpeaksLanguage(x, l) \land SpeaksLanguage(y, l) \Rightarrow Understands(x, y) \land Understands(y, x).$$
2. Explain why this sentence is entailed by the sentence $$\forall x,y,l\;\; SpeaksLanguage(x, l) \land SpeaksLanguage(y, l) \Rightarrow Understands(x, y).$$
3. Translate into first-order logic the following sentences:
   1. Understanding leads to friendship.
   2. Friendship is transitive.
   Remember to define all predicates, functions, and constants you use.

Exercise 14 True or false? Explain.
1. $\exists x\; x = {Rumpelstiltskin}$ is a valid (necessarily true) sentence of first-order logic.
2. Every existentially quantified sentence in first-order logic is true in any model that contains exactly one object.
3. $\forall x,y\; x = y$ is satisfiable.

Exercise 15 (Peano-completion-exercise) Rewrite the first two Peano axioms in Section Peano-section as a single axiom that defines ${NatNum}(x)$ so as to exclude the possibility of natural numbers except for those generated by the successor function.

Exercise 16 (wumpus-diagnostic-exercise) Equation (pit-biconditional-equation) on page pit-biconditional-equation defines the conditions under which a square is breezy. Here we consider two other ways to describe this aspect of the wumpus world.
1. We can write diagnostic rules leading from observed effects to hidden causes. For finding pits, the obvious diagnostic rules say that if a square is breezy, some adjacent square must contain a pit; and if a square is not breezy, then no adjacent square contains a pit. Write these two rules in first-order logic and show that their conjunction is logically equivalent to Equation (pit-biconditional-equation).
2. We can write causal rules leading from cause to effect. One obvious causal rule is that a pit causes all adjacent squares to be breezy. Write this rule in first-order logic, explain why it is incomplete compared to Equation (pit-biconditional-equation), and supply the missing axiom.
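The equivalence claimed in part 1 of the breeziness exercise can be sanity-checked propositionally for a single square: fix square [1,1] with neighbors [1,2] and [2,1], and enumerate all truth assignments to confirm that the two diagnostic rules together say the same thing as the biconditional. A minimal brute-force sketch, with all names chosen here for illustration:

```python
from itertools import product

# Propositional instance for square [1,1] with neighbors [1,2] and [2,1]:
#   biconditional:  B11 <=> (P12 or P21)
#   diagnostic 1:   B11 => (P12 or P21)
#   diagnostic 2:   not B11 => (not P12 and not P21)
def biconditional(b11, p12, p21):
    return b11 == (p12 or p21)

def diagnostic_rules(b11, p12, p21):
    rule1 = (not b11) or (p12 or p21)          # breezy => adjacent pit
    rule2 = b11 or (not p12 and not p21)       # not breezy => no adjacent pit
    return rule1 and rule2

# The two formulations agree in every one of the 8 models.
same = all(biconditional(*m) == diagnostic_rules(*m)
           for m in product([False, True], repeat=3))
print(same)  # True
```

The causal rule of part 2 corresponds to `rule2`'s contrapositive direction alone, which is why, on its own, it leaves models where a square is breezy with no adjacent pit.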
Exercise 17 (kinship-exercise)

Write axioms describing the predicates ${Grandchild}$, ${Greatgrandparent}$, ${Ancestor}$, ${Brother}$, ${Sister}$, ${Daughter}$, ${Son}$, ${FirstCousin}$, ${BrotherInLaw}$, ${SisterInLaw}$, ${Aunt}$, and ${Uncle}$. Find out the proper definition of $m$th cousin $n$ times removed, and write the definition in first-order logic. Now write down the basic facts depicted in the family tree in Figure family1-figure. Using a suitable logical reasoning system, tell it all the sentences you have written down, and ask it who are Elizabeth’s grandchildren, Diana’s brothers-in-law, Zara’s great-grandparents, and Eugenie’s ancestors.

A typical family tree. The symbol $\bowtie$ connects spouses and arrows point to children.

Exercise 18

Write down a sentence asserting that $+$ is a commutative function. Does your sentence follow from the Peano axioms? If so, explain why; if not, give a model in which the axioms are true and your sentence is false.

Exercise 19

Explain what is wrong with the following proposed definition of the set membership predicate:
$$\forall\,x,s\;\; x \in \{x|s\}$$
$$\forall\,x,s\;\; x \in s \Rightarrow \forall\,y\;\; x \in \{y|s\}$$

Exercise 20 (list-representation-exercise)

Using the set axioms as examples, write axioms for the list domain, including all the constants, functions, and predicates mentioned in the chapter.

Exercise 21 (adjacency-exercise)

Explain what is wrong with the following proposed definition of adjacent squares in the wumpus world:
$$\forall\,x,y\;\; {Adjacent}([x,y], [x+1,y]) \land {Adjacent}([x,y], [x,y+1]).$$

Exercise 22

Write out the axioms required for reasoning about the wumpus’s location, using a constant symbol ${Wumpus}$ and a binary predicate ${At}({Wumpus}, {Location})$. Remember that there is only one wumpus.

Exercise 23

Assuming predicates ${Parent}(p,q)$ and ${Female}(p)$ and constants ${Joan}$ and ${Kevin}$, with the obvious meanings, express each of the following sentences in first-order logic.
(You may use the abbreviation $\exists^{1}$ to mean “there exists exactly one.”)
1. Joan has a daughter (possibly more than one, and possibly sons as well).
2. Joan has exactly one daughter (but may have sons as well).
3. Joan has exactly one child, a daughter.
4. Joan and Kevin have exactly one child together.
5. Joan has at least one child with Kevin, and no children with anyone else.

Exercise 24

Arithmetic assertions can be written in first-order logic with the predicate symbol $<$, the function symbols ${+}$ and ${\times}$, and the constant symbols 0 and 1. Additional predicates can also be defined with biconditionals.
1. Represent the property “$x$ is an even number.”
2. Represent the property “$x$ is prime.”
3. Goldbach’s conjecture is the conjecture (unproven as yet) that every even number is equal to the sum of two primes. Represent this conjecture as a logical sentence.

Exercise 25

In Chapter csp-chapter, we used equality to indicate the relation between a variable and its value. For instance, we wrote ${WA} = {red}$ to mean that Western Australia is colored red. Representing this in first-order logic, we must write more verbosely ${ColorOf}({WA}) = {red}$. What incorrect inference could be drawn if we wrote sentences such as ${WA} = {red}$ directly as logical assertions?

Exercise 26

Write in first-order logic the assertion that every key and at least one of every pair of socks will eventually be lost forever, using only the following vocabulary: ${Key}(x)$, $x$ is a key; ${Sock}(x)$, $x$ is a sock; ${Pair}(x,y)$, $x$ and $y$ are a pair; ${Now}$, the current time; ${Before}(t_1,t_2)$, time $t_1$ comes before time $t_2$; ${Lost}(x,t)$, object $x$ is lost at time $t$.

Exercise 27

For each of the following sentences in English, decide if the accompanying first-order logic sentence is a good translation. If not, explain why not and correct it. (Some sentences may have more than one error!)
1. No two people have the same social security number.
$$\lnot \exists\,x,y,n\;\; {Person}(x) \land {Person}(y) \Rightarrow [{HasSS}\#(x,n) \land {HasSS}\#(y,n)].$$
2. John’s social security number is the same as Mary’s.
$$\exists\,n\;\; {HasSS}\#({John},n) \land {HasSS}\#({Mary},n).$$
3. Everyone’s social security number has nine digits.
$$\forall\,x,n\;\; {Person}(x) \Rightarrow [{HasSS}\#(x,n) \land {Digits}(n,9)].$$
4. Rewrite each of the above (uncorrected) sentences using a function symbol ${SS}\#$ instead of the predicate ${HasSS}\#$.

Exercise 28

Translate into first-order logic the sentence “Everyone’s DNA is unique and is derived from their parents’ DNA.” You must specify the precise intended meaning of your vocabulary terms. (*Hint*: Do not use the predicate ${Unique}(x)$, since uniqueness is not really a property of an object in itself!)

Exercise 29

For each of the following sentences in English, decide if the accompanying first-order logic sentence is a good translation. If not, explain why not and correct it.
1. Any apartment in London has lower rent than some apartments in Paris.
$$\forall x\; [{Apt}(x) \land {In}(x,{London})] \Rightarrow \exists y\; ([{Apt}(y) \land {In}(y,{Paris})] \Rightarrow ({Rent}(x) < {Rent}(y)))$$
2. There is exactly one apartment in Paris with rent below \$1000.
$$\exists x\; {Apt}(x) \land {In}(x,{Paris}) \land \forall y\; [{Apt}(y) \land {In}(y,{Paris}) \land ({Rent}(y) < {Dollars}(1000))] \Rightarrow (y = x)$$
3. If an apartment is more expensive than all apartments in London, it must be in Moscow.
$$\forall x\; {Apt}(x) \land [\forall y\; {Apt}(y) \land {In}(y,{London}) \land ({Rent}(x) > {Rent}(y))] \Rightarrow {In}(x,{Moscow}).$$

Exercise 30

Represent the following sentences in first-order logic, using a consistent vocabulary (which you must define):
1. Some students took French in spring 2001.
2. Every student who takes French passes it.
3. Only one student took Greek in spring 2001.
4. The best score in Greek is always higher than the best score in French.
5. Every person who buys a policy is smart.
6.
No person buys an expensive policy.
7. There is an agent who sells policies only to people who are not insured.
8. There is a barber who shaves all men in town who do not shave themselves.
9. A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK citizen by birth.
10. A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by descent.
11. Politicians can fool some of the people all of the time, and they can fool all of the people some of the time, but they can’t fool all of the people all of the time.
12. All Greeks speak the same language. (Use ${Speaks}(x,l)$ to mean that person $x$ speaks language $l$.)

Exercise 31

Represent the following sentences in first-order logic, using a consistent vocabulary (which you must define):
1. Some students took French in spring 2001.
2. Every student who takes French passes it.
3. Only one student took Greek in spring 2001.
4. The best score in Greek is always higher than the best score in French.
5. Every person who buys a policy is smart.
6. No person buys an expensive policy.
7. There is an agent who sells policies only to people who are not insured.
8. There is a barber who shaves all men in town who do not shave themselves.
9. A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK citizen by birth.
10. A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by descent.
11. Politicians can fool some of the people all of the time, and they can fool all of the people some of the time, but they can’t fool all of the people all of the time.
12. All Greeks speak the same language. (Use ${Speaks}(x,l)$ to mean that person $x$ speaks language $l$.)
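One way to debug a candidate translation for a sentence like item 12 is to evaluate it by brute force over a tiny finite model. A sketch, assuming the candidate translation $\forall\,x,y,l\;\; Greek(x) \land Greek(y) \land Speaks(x,l) \Rightarrow Speaks(y,l)$; the domain and all names below are invented for illustration:

```python
from itertools import product

PEOPLE = ["Sophia", "Niko", "Omar"]   # hypothetical domain
LANGS = ["Greek", "Arabic"]
GREEK = {"Sophia", "Niko"}            # extension of Greek(x)

def all_greeks_same_language(speaks):
    """forall x, y, l: Greek(x) & Greek(y) & Speaks(x, l) => Speaks(y, l)"""
    return all(
        (y, l) in speaks
        for x, y, l in product(PEOPLE, PEOPLE, LANGS)
        if x in GREEK and y in GREEK and (x, l) in speaks
    )

# Model where the sentence holds: both Greeks speak only Greek.
speaks = {("Sophia", "Greek"), ("Niko", "Greek"), ("Omar", "Arabic")}
print(all_greeks_same_language(speaks))                          # True
# Give one Greek a second language and the universal claim fails.
print(all_greeks_same_language(speaks | {("Niko", "Arabic")}))   # False
```

Trying a few such models quickly exposes translations that are too strong or too weak, which is exactly the failure mode these exercises are probing.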
Exercise 32

Write a general set of facts and axioms to represent the assertion “Wellington heard about Napoleon’s death” and to correctly answer the question “Did Napoleon hear about Wellington’s death?”

Exercise 33 (4bit-adder-exercise)

Extend the vocabulary from Section circuits-section to define addition for $n$-bit binary numbers. Then encode the description of the four-bit adder in Figure 4bit-adder-figure, and pose the queries needed to verify that it is in fact correct.

A four-bit adder. Each ${Ad}_i$ is a one-bit adder, as in Figure adder-figure on page adder-figure.

Exercise 34

The circuit representation in the chapter is more detailed than necessary if we care only about circuit functionality. A simpler formulation describes any $m$-input, $n$-output gate or circuit using a predicate with $m+n$ arguments, such that the predicate is true exactly when the inputs and outputs are consistent. For example, NOT gates are described by the binary predicate ${NOT}(i,o)$, for which ${NOT}(0,1)$ and ${NOT}(1,0)$ are known. Compositions of gates are defined by conjunctions of gate predicates in which shared variables indicate direct connections. For example, a NAND circuit can be composed from ${AND}$s and ${NOT}$s:
$$\forall\,i_1,i_2,o_a,o\;\; {AND}(i_1,i_2,o_a) \land {NOT}(o_a,o) \Rightarrow {NAND}(i_1,i_2,o).$$
Using this representation, define the one-bit adder in Figure adder-figure and the four-bit adder in Figure 4bit-adder-figure, and explain what queries you would use to verify the designs. What kinds of queries are *not* supported by this representation that *are* supported by the representation in Section circuits-section?
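Exercise 34's relational view of gates is easy to prototype: represent each gate predicate as the set of argument tuples on which it holds, and compose gates by conjunction with a shared internal wire. A minimal sketch (Python sets standing in for the predicates; the trailing underscores in the names are just to avoid shadowing):

```python
# A gate predicate is the set of argument tuples on which it holds.
NOT_ = {(0, 1), (1, 0)}
AND_ = {(a, b, int(a and b)) for a in (0, 1) for b in (0, 1)}

# NAND(i1, i2, o) from AND(i1, i2, oa) & NOT(oa, o): the shared
# variable oa is the direct connection between the two gates.
NAND_ = {(i1, i2, o)
         for (i1, i2, oa) in AND_
         for (oa2, o) in NOT_
         if oa == oa2}

print(sorted(NAND_))  # [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

The join on the shared wire is exactly what the conjunction with a shared variable does in the first-order formulation, and the resulting relation is the NAND truth table.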
Exercise 35

Obtain a passport application for your country, identify the rules determining eligibility for a passport, and translate them into first-order logic, following the steps outlined in Section circuits-section.

Exercise 36

Consider a first-order logical knowledge base that describes worlds containing people, songs, albums (e.g., “Meet the Beatles”) and disks (i.e., particular physical instances of CDs). The vocabulary contains the following symbols:

> ${CopyOf}(d,a)$: Predicate. Disk $d$ is a copy of album $a$.
> ${Owns}(p,d)$: Predicate. Person $p$ owns disk $d$.
> ${Sings}(p,s,a)$: Album $a$ includes a recording of song $s$ sung by person $p$.
> ${Wrote}(p,s)$: Person $p$ wrote song $s$.
> ${McCartney}$, ${Gershwin}$, ${BHoliday}$, ${Joe}$, ${EleanorRigby}$, ${TheManILove}$, ${Revolver}$: Constants with the obvious meanings.

Express the following statements in first-order logic:
1. Gershwin wrote “The Man I Love.”
2. Gershwin did not write “Eleanor Rigby.”
3. Either Gershwin or McCartney wrote “The Man I Love.”
4. Joe has written at least one song.
5. Joe owns a copy of *Revolver*.
6. Every song that McCartney sings on *Revolver* was written by McCartney.
7. Gershwin did not write any of the songs on *Revolver*.
8. Every song that Gershwin wrote has been recorded on some album. (Possibly different songs are recorded on different albums.)
9. There is a single album that contains every song that Joe has written.
10. Joe owns a copy of an album that has Billie Holiday singing “The Man I Love.”
11. Joe owns a copy of every album that has a song sung by McCartney. (Of course, each different album is instantiated in a different physical CD.)
12. Joe owns a copy of every album on which all the songs are sung by Billie Holiday.

Exercise 1

Prove that Universal Instantiation is sound and that Existential Instantiation produces an inferentially equivalent knowledge base.
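Universal Instantiation is just the substitution of a ground term for the universally quantified variable, and seeing it as code makes the soundness argument of Exercise 1 concrete. A sketch, assuming a representation of my own (not the chapter's): sentences are nested tuples and lowercase strings are variables.

```python
def subst(s, t):
    """Apply substitution s (a dict from variables to terms) to t,
    recursing through compound terms represented as tuples."""
    if isinstance(t, tuple):
        return tuple(subst(s, arg) for arg in t)
    return s.get(t, t)

# From forall x: Likes(x, IceCream), UI licenses any ground instance:
body = ('Likes', 'x', 'IceCream')
print(subst({'x': 'Jerry'}, body))  # ('Likes', 'Jerry', 'IceCream')
```

Soundness falls out of the semantics of the universal quantifier: every object in the model satisfies the body, so in particular the object named by the ground term does.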
Exercise 2

From ${Likes}({Jerry},{IceCream})$ it seems reasonable to infer $\exists\,x\;\; {Likes}(x,{IceCream})$. Write down a general inference rule, Existential Introduction, that sanctions this inference. State carefully the conditions that must be satisfied by the variables and terms involved.

Exercise 3

Suppose a knowledge base contains just one sentence, $\exists\,x\;\; {AsHighAs}(x,{Everest})$. Which of the following are legitimate results of applying Existential Instantiation?
1. ${AsHighAs}({Everest},{Everest})$.
2. ${AsHighAs}({Kilimanjaro},{Everest})$.
3. ${AsHighAs}({Kilimanjaro},{Everest}) \land {AsHighAs}({BenNevis},{Everest})$ (after two applications).

Exercise 4

For each pair of atomic sentences, give the most general unifier if it exists:
1. $P(A,B,B)$, $P(x,y,z)$.
2. $Q(y,G(A,B))$, $Q(G(x,x),y)$.
3. ${Older}({Father}(y),y)$, ${Older}({Father}(x),{John})$.
4. ${Knows}({Father}(y),y)$, ${Knows}(x,x)$.

Exercise 5

For each pair of atomic sentences, give the most general unifier if it exists:
1. $P(A,B,B)$, $P(x,y,z)$.
2. $Q(y,G(A,B))$, $Q(G(x,x),y)$.
3. ${Older}({Father}(y),y)$, ${Older}({Father}(x),{John})$.
4. ${Knows}({Father}(y),y)$, ${Knows}(x,x)$.

Exercise 6 (subsumption-lattice-exercise)

Consider the subsumption lattices shown in Figure subsumption-lattice-figure (page subsumption-lattice-figure).
1. Construct the lattice for the sentence ${Employs}({Mother}({John}),{Father}({Richard}))$.
2. Construct the lattice for the sentence ${Employs}({IBM},y)$ (“Everyone works for IBM”). Remember to include every kind of query that unifies with the sentence.
3. Assume that Store indexes each sentence under every node in its subsumption lattice. Explain how Fetch should work when some of these sentences contain variables; use as examples the sentences in (a) and (b) and the query ${Employs}(x,{Father}(x))$.

Exercise 7 (fol-horses-exercise)

Write down logical representations for the following sentences, suitable for use with Generalized Modus Ponens:
1. Horses, cows, and pigs are mammals.
2.
An offspring of a horse is a horse.
3. Bluebeard is a horse.
4. Bluebeard is Charlie’s parent.
5. Offspring and parent are inverse relations.
6. Every mammal has a parent.

Exercise 8

These questions concern issues with substitution and Skolemization.
1. Given the premise $\forall\,x\;\; \exists\,y\;\; P(x,y)$, it is not valid to conclude that $\exists\,q\;\; P(q,q)$. Give an example of a predicate $P$ where the first is true but the second is false.
2. Suppose that an inference engine is incorrectly written with the occurs check omitted, so that it allows a literal like $P(x,F(x))$ to be unified with $P(q,q)$. (As mentioned, most standard implementations of Prolog actually do allow this.) Show that such an inference engine will allow the conclusion $\exists\,q\;\; P(q,q)$ to be inferred from the premise $\forall\,x\;\; \exists\,y\;\; P(x,y)$.
3. Suppose that a procedure that converts first-order logic to clausal form incorrectly Skolemizes $\forall\,x\;\; \exists\,y\;\; P(x,y)$ to $P(x,Sk0)$—that is, it replaces $y$ by a Skolem constant rather than by a Skolem function of $x$. Show that an inference engine that uses such a procedure will likewise allow $\exists\,q\;\; P(q,q)$ to be inferred from the premise $\forall\,x\;\; \exists\,y\;\; P(x,y)$.
4. A common error among students is to suppose that, in unification, one is allowed to substitute a term for a Skolem constant instead of for a variable. For instance, they will say that the formulas $P(Sk1)$ and $P(A)$ can be unified under the substitution $\{Sk1/A\}$. Give an example where this leads to an invalid inference.

Exercise 9

This question considers Horn KBs, such as the following:
$$\begin{array}{l} P(F(x)) \Rightarrow P(x) \\ Q(x) \Rightarrow P(F(x)) \\ P(A) \\ Q(B) \end{array}$$
Let FC be a breadth-first forward-chaining algorithm that repeatedly adds all consequences of currently satisfied rules; let BC be a depth-first left-to-right backward-chaining algorithm that tries clauses in the order given in the KB.
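Before answering questions about FC on this KB, it can help to run the chaining mechanically. A minimal sketch specialized to the two rules above (terms as nested tuples; the depth bound is an added assumption of mine so the run terminates despite the ever-growing $F$ terms):

```python
def term_depth(t):
    """Nesting depth of a term: 'B' -> 0, ('F', 'B') -> 1, etc."""
    return 1 + term_depth(t[1]) if isinstance(t, tuple) else 0

def forward_chain(facts, max_depth=3):
    """Breadth-first forward chaining to a fixed point for the rules
    P(F(x)) => P(x) and Q(x) => P(F(x))."""
    facts = set(facts)
    while True:
        new = set()
        for pred, arg in facts:
            if pred == 'P' and isinstance(arg, tuple) and arg[0] == 'F':
                new.add(('P', arg[1]))        # P(F(x)) => P(x)
            if pred == 'Q' and term_depth(arg) < max_depth:
                new.add(('P', ('F', arg)))    # Q(x) => P(F(x))
        if new <= facts:
            return facts
        facts |= new

derived = forward_chain({('P', 'A'), ('Q', 'B')})
print(('P', 'B') in derived)   # True: Q(B) => P(F(B)) => P(B)
print(('Q', 'A') in derived)   # False: no rule concludes Q
```

The run shows the shape of the answer: $P(B)$ comes out via $P(F(B))$, while nothing ever adds a $Q$ fact.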
Which of the following are true?
1. FC will infer the literal $Q(A)$.
2. FC will infer the literal $P(B)$.
3. If FC has failed to infer a given literal, then it is not entailed by the KB.
4. BC will return ${true}$ given the query $P(B)$.
5. If BC does not return ${true}$ given a query literal, then it is not entailed by the KB.

Exercise 10 (csp-clause-exercise)

Explain how to write any given 3-SAT problem of arbitrary size using a single first-order definite clause and no more than 30 ground facts.

Exercise 11

Suppose you are given the following axioms:
1. $0 \leq 3$.
2. $7 \leq 9$.
3. $\forall\,x\;\; x \leq x$.
4. $\forall\,x\;\; x \leq x+0$.
5. $\forall\,x\;\; x+0 \leq x$.
6. $\forall\,x,y\;\; x+y \leq y+x$.
7. $\forall\,w,x,y,z\;\; w \leq y \wedge x \leq z \Rightarrow w+x \leq y+z$.
8. $\forall\,x,y,z\;\; x \leq y \wedge y \leq z \Rightarrow x \leq z$.

1. Give a backward-chaining proof of the sentence $7 \leq 3+9$. (Be sure, of course, to use only the axioms given here, not anything else you may know about arithmetic.) Show only the steps that lead to success, not the irrelevant steps.
2. Give a forward-chaining proof of the sentence $7 \leq 3+9$. Again, show only the steps that lead to success.

Exercise 12

Suppose you are given the following axioms:
> 1. $0 \leq 4$.
> 2. $5 \leq 9$.
> 3. $\forall\,x\;\; x \leq x$.
> 4. $\forall\,x\;\; x \leq x+0$.
> 5. $\forall\,x\;\; x+0 \leq x$.
> 6. $\forall\,x,y\;\; x+y \leq y+x$.
> 7. $\forall\,w,x,y,z\;\; w \leq y \wedge x \leq z \Rightarrow w+x \leq y+z$.
> 8. $\forall\,x,y,z\;\; x \leq y \wedge y \leq z \Rightarrow x \leq z$.

1. Give a backward-chaining proof of the sentence $5 \leq 4+9$. (Be sure, of course, to use only the axioms given here, not anything else you may know about arithmetic.) Show only the steps that lead to success, not the irrelevant steps.
2. Give a forward-chaining proof of the sentence $5 \leq 4+9$. Again, show only the steps that lead to success.
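Exercises 4 and 5 above ask for most general unifiers, and a small unifier with an occurs check settles each case mechanically. A sketch under an assumed representation (compound terms are tuples, lowercase strings are variables, capitalized strings are constants):

```python
def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, s=None):
    """Return an MGU (a dict) extending s, or None if x and y do not unify."""
    s = {} if s is None else s
    if x == y:
        return s
    if is_var(x):
        return unify_var(x, y, s)
    if is_var(y):
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def unify_var(v, t, s):
    if v in s:
        return unify(s[v], t, s)
    if is_var(t) and t in s:
        return unify(v, s[t], s)
    if occurs(v, t, s):               # occurs check: reject e.g. x / F(x)
        return None
    return {**s, v: t}

def occurs(v, t, s):
    if t == v:
        return True
    if is_var(t) and t in s:
        return occurs(v, s[t], s)
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t)

print(unify(('P', 'A', 'B', 'B'), ('P', 'x', 'y', 'z')))
# {'x': 'A', 'y': 'B', 'z': 'B'}
print(unify(('Knows', ('Father', 'y'), 'y'), ('Knows', 'x', 'x')))
# None: the occurs check blocks y / Father(y)
```

The $Q(y,G(A,B))$ vs. $Q(G(x,x),y)$ pair also fails, but for a different reason: binding $y$ to $G(x,x)$ forces $G(A,B)$ to unify with $G(x,x)$, and $x$ cannot be both $A$ and $B$.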
Exercise 13

A popular children’s riddle is “Brothers and sisters have I none, but that man’s father is my father’s son.” Use the rules of the family domain (Section kinship-domain-section on page kinship-domain-section) to show who that man is. You may apply any of the inference methods described in this chapter. Why do you think that this riddle is difficult?

Exercise 14

Suppose we put into a logical knowledge base a segment of the U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by ${Age}(\text{443-65-1282}, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?
- S1: an index for each atom in each position.
- S2: an index for each first argument.
- S3: an index for each predicate atom.
- S4: an index for each combination of predicate and first argument.
- S5: an index for each combination of predicate and second argument and an index for each first argument.
- Q1: ${Age}(\text{443-44-4321},x)$
- Q2: ${ResidesIn}(x,{Houston})$
- Q3: ${Mother}(x,y)$
- Q4: ${Age}(x,{34}) \land {ResidesIn}(x,{TinyTownUSA})$

Exercise 15 (standardize-failure-exercise)

One might suppose that we can avoid the problem of variable conflict in unification during backward chaining by standardizing apart all of the sentences in the knowledge base once and for all. Show that, for some sentences, this approach cannot work. (Hint: Consider a sentence in which one part unifies with another.)

Exercise 16

In this exercise, use the sentences you wrote in Exercise fol-horses-exercise to answer a question by using a backward-chaining algorithm.
1. Draw the proof tree generated by an exhaustive backward-chaining algorithm for the query $\exists\,h\;\; {Horse}(h)$, where clauses are matched in the order given.
2. What do you notice about this domain?
3.
How many solutions for $h$ actually follow from your sentences?
4. Can you think of a way to find all of them? (Hint: See Smith+al:1986.)

Exercise 17 (bc-trace-exercise)

Trace the execution of the backward-chaining algorithm in Figure backward-chaining-algorithm (page backward-chaining-algorithm) when it is applied to solve the crime problem (page west-problem-page). Show the sequence of values taken on by the ${goals}$ variable, and arrange them into a tree.

Exercise 18

The following Prolog code defines a predicate P. (Remember that uppercase terms are variables, not constants, in Prolog.)

    P(X,[X|Y]).
    P(X,[Y|Z]) :- P(X,Z).

1. Show proof trees and solutions for the queries P(A,[2,1,3]) and P(2,[1,A,3]).
2. What standard list operation does P represent?

Exercise 19

The following Prolog code defines a predicate P. (Remember that uppercase terms are variables, not constants, in Prolog.)

    P(X,[X|Y]).
    P(X,[Y|Z]) :- P(X,Z).

1. Show proof trees and solutions for the queries P(A,[1,2,3]) and P(2,[1,A,3]).
2. What standard list operation does P represent?

Exercise 20

This exercise looks at sorting in Prolog.
1. Write Prolog clauses that define the predicate sorted(L), which is true if and only if list L is sorted in ascending order.
2. Write a Prolog definition for the predicate perm(L,M), which is true if and only if L is a permutation of M.
3. Define sort(L,M) (M is a sorted version of L) using perm and sorted.
4. Run sort on longer and longer lists until you lose patience. What is the time complexity of your program?
5. Write a faster sorting algorithm, such as insertion sort or quicksort, in Prolog.

Exercise 21 (diff-simplify-exercise)

This exercise looks at the recursive application of rewrite rules, using logic programming. A rewrite rule (or demodulator, in the terminology of resolution theorem provers) is an equation with a specified direction. For example, the rewrite rule $x+0 \rightarrow x$ suggests replacing any expression that matches $x+0$ with the expression $x$.
Rewrite rules are a key component of equational reasoning systems. Use the predicate rewrite(X,Y) to represent rewrite rules. For example, the earlier rewrite rule is written as rewrite(X+0,X). Some terms are *primitive* and cannot be further simplified; thus, we write primitive(0) to say that 0 is a primitive term.
1. Write a definition of a predicate simplify(X,Y), that is true when Y is a simplified version of X—that is, when no further rewrite rules apply to any subexpression of Y.
2. Write a collection of rules for the simplification of expressions involving arithmetic operators, and apply your simplification algorithm to some sample expressions.
3. Write a collection of rewrite rules for symbolic differentiation, and use them along with your simplification rules to differentiate and simplify expressions involving arithmetic expressions, including exponentiation.

Exercise 22

This exercise considers the implementation of search algorithms in Prolog. Suppose that successor(X,Y) is true when state Y is a successor of state X, and that goal(X) is true when X is a goal state. Write a definition for solve(X,P), which means that P is a path (list of states) beginning with X, ending in a goal state, and consisting of a sequence of legal steps as defined by successor. You will find that depth-first search is the easiest way to do this. How easy would it be to add heuristic search control?

Exercise 23

Suppose a knowledge base contains just the following first-order Horn clauses:
$$Ancestor(Mother(x),x)$$
$$Ancestor(x,y) \land Ancestor(y,z) \Rightarrow Ancestor(x,z)$$
Consider a forward chaining algorithm that, on the $j$th iteration, terminates if the KB contains a sentence that unifies with the query, else adds to the KB every atomic sentence that can be inferred from the sentences already in the KB after iteration $j-1$.
1. For each of the following queries, say whether the algorithm will (1) give an answer (if so, write down that answer); or (2) terminate with no answer; or (3) never terminate.
   1.
$Ancestor(Mother(y),John)$
   2. $Ancestor(Mother(Mother(y)),John)$
   3. $Ancestor(Mother(Mother(Mother(y))),Mother(y))$
   4. $Ancestor(Mother(John),Mother(Mother(John)))$
2. Can a resolution algorithm prove the sentence $\lnot Ancestor(John,John)$ from the original knowledge base? Explain how, or why not.
3. Suppose we add the assertion that $\lnot(Mother(x) = x)$ and augment the resolution algorithm with inference rules for equality. Now what is the answer to (b)?

Exercise 24

Let $\cal L$ be the first-order language with a single predicate $S(p,q)$, meaning “$p$ shaves $q$.” Assume a domain of people.
1. Consider the sentence “There exists a person $P$ who shaves everyone who does not shave themselves, and only people that do not shave themselves.” Express this in $\cal L$.
2. Convert the sentence in (a) to clausal form.
3. Construct a resolution proof to show that the clauses in (b) are inherently inconsistent. (Note: you do not need any additional axioms.)

Exercise 25

How can resolution be used to show that a sentence is valid? Unsatisfiable?

Exercise 26

Construct an example of two clauses that can be resolved together in two different ways giving two different outcomes.

Exercise 27

From “Horses are animals,” it follows that “The head of a horse is the head of an animal.” Demonstrate that this inference is valid by carrying out the following steps:
1. Translate the premise and the conclusion into the language of first-order logic. Use three predicates: ${HeadOf}(h,x)$ (meaning “$h$ is the head of $x$”), ${Horse}(x)$, and ${Animal}(x)$.
2. Negate the conclusion, and convert the premise and the negated conclusion into conjunctive normal form.
3. Use resolution to show that the conclusion follows from the premise.

Exercise 28

From “Sheep are animals,” it follows that “The head of a sheep is the head of an animal.” Demonstrate that this inference is valid by carrying out the following steps:
1. Translate the premise and the conclusion into the language of first-order logic.
Use three predicates: ${HeadOf}(h,x)$ (meaning “$h$ is the head of $x$”), ${Sheep}(x)$, and ${Animal}(x)$.
2. Negate the conclusion, and convert the premise and the negated conclusion into conjunctive normal form.
3. Use resolution to show that the conclusion follows from the premise.

Exercise 29 (quantifier-order-exercise)

Here are two sentences in the language of first-order logic:
- (A) $\forall\,x\;\; \exists\,y\;\; (x \geq y)$
- (B) $\exists\,y\;\; \forall\,x\;\; (x \geq y)$

1. Assume that the variables range over all the natural numbers $0,1,2,\ldots,\infty$ and that the “$\geq$” predicate means “is greater than or equal to.” Under this interpretation, translate (A) and (B) into English.
2. Is (A) true under this interpretation?
3. Is (B) true under this interpretation?
4. Does (A) logically entail (B)?
5. Does (B) logically entail (A)?
6. Using resolution, try to prove that (A) follows from (B). Do this even if you think that (B) does not logically entail (A); continue until the proof breaks down and you cannot proceed (if it does break down). Show the unifying substitution for each resolution step. If the proof fails, explain exactly where, how, and why it breaks down.
7. Now try to prove that (B) follows from (A).

Exercise 30

Resolution can produce nonconstructive proofs for queries with variables, so we had to introduce special mechanisms to extract definite answers. Explain why this issue does not arise with knowledge bases containing only definite clauses.

Exercise 31

We said in this chapter that resolution cannot be used to generate all logical consequences of a set of sentences. Can any algorithm do this?

Exercise 1

Consider a robot whose operation is described by the following PDDL operators:
$$Op({Go(x,y)}, {At(Robot,x)}, {\lnot At(Robot,x) \land At(Robot,y)})$$
$$Op({Pick(o)}, {At(Robot,x) \land At(o,x)}, {\lnot At(o,x) \land Holding(o)})$$
$$Op({Drop(o)}, {At(Robot,x) \land Holding(o)}, {At(o,x) \land \lnot Holding(o)})$$
1. The operators allow the robot to hold more than one object.
Show how to modify them with an $EmptyHand$ predicate for a robot that can hold only one object.
2. Assuming that these are the only actions in the world, write a successor-state axiom for $EmptyHand$.

Exercise 2

Describe the differences and similarities between problem solving and planning.

Exercise 3

Given the action schemas and initial state from Figure airport-pddl-algorithm, what are all the applicable concrete instances of ${Fly}(p,{from},{to})$ in the state described by
$$At(P_1,JFK) \land At(P_2,SFO) \land Plane(P_1) \land Plane(P_2) \land Airport(JFK) \land Airport(SFO)?$$

Exercise 4

The monkey-and-bananas problem is faced by a monkey in a laboratory with some bananas hanging out of reach from the ceiling. A box is available that will enable the monkey to reach the bananas if he climbs on it. Initially, the monkey is at $A$, the bananas at $B$, and the box at $C$. The monkey and box have height ${Low}$, but if the monkey climbs onto the box he will have height ${High}$, the same as the bananas. The actions available to the monkey include ${Go}$ from one place to another, ${Push}$ an object from one place to another, ${ClimbUp}$ onto or ${ClimbDown}$ from an object, and ${Grasp}$ or ${Ungrasp}$ an object. The result of a ${Grasp}$ is that the monkey holds the object if the monkey and object are in the same place at the same height.
1. Write down the initial state description.
2. Write the six action schemas.
3. Suppose the monkey wants to fool the scientists, who are off to tea, by grabbing the bananas, but leaving the box in its original place. Write this as a general goal (i.e., not assuming that the box is necessarily at C) in the language of situation calculus. Can this goal be solved by a classical planning system?
4. Your schema for pushing is probably incorrect, because if the object is too heavy, its position will remain the same when the ${Push}$ schema is applied.
Fix your action schema to account for heavy objects.

Exercise 5

The original Strips planner was designed to control Shakey the robot. Figure shakey-figure shows a version of Shakey’s world consisting of four rooms lined up along a corridor, where each room has a door and a light switch. The actions in Shakey’s world include moving from place to place, pushing movable objects (such as boxes), climbing onto and down from rigid objects (such as boxes), and turning light switches on and off. The robot itself could not climb on a box or toggle a switch, but the planner was capable of finding and printing out plans that were beyond the robot’s abilities. Shakey’s six actions are the following:
- ${Go}(x,y,r)$, which requires that Shakey be ${At}$ $x$ and that $x$ and $y$ are locations ${In}$ the same room $r$. By convention a door between two rooms is in both of them.
- Push a box $b$ from location $x$ to location $y$ within the same room: ${Push}(b,x,y,r)$. You will need the predicate ${Box}$ and constants for the boxes.
- Climb onto a box from position $x$: ${ClimbUp}(x,b)$; climb down from a box to position $x$: ${ClimbDown}(b,x)$. We will need the predicate ${On}$ and the constant ${Floor}$.
- Turn a light switch on or off: ${TurnOn}(s,b)$; ${TurnOff}(s,b)$. To turn a light on or off, Shakey must be on top of a box at the light switch’s location.

Write PDDL sentences for Shakey’s six actions and the initial state from Figure shakey-figure. Construct a plan for Shakey to get ${Box}_2$ into ${Room}_2$.

Shakey’s world. Shakey can move between landmarks within a room, can pass through the door between rooms, can climb climbable objects and push pushable objects, and can flip light switches.

Exercise 6

A finite Turing machine has a finite one-dimensional tape of cells, each cell containing one of a finite number of symbols. One cell has a read and write head above it. There is a finite set of states the machine can be in, one of which is the accept state.
At each time step, depending on the symbol on the cell under the head and the machine’s current state, there is a set of actions we can choose from. Each action involves writing a symbol to the cell under the head, transitioning the machine to a state, and optionally moving the head left or right. The mapping that determines which actions are allowed is the Turing machine’s program. Your goal is to control the machine into the accept state. Represent the Turing machine acceptance problem as a planning problem. If you can do this, it demonstrates that determining whether a planning problem has a solution is at least as hard as the Turing acceptance problem, which is PSPACE-hard.

Exercise 7 (negative-effects-exercise)

Explain why dropping negative effects from every action schema results in a relaxed problem, provided that preconditions and goals contain only positive literals.

Exercise 8 (sussman-anomaly-exercise)

Figure sussman-anomaly-figure (page sussman-anomaly-figure) shows a blocks-world problem that is known as the Sussman anomaly. The problem was considered anomalous because the noninterleaved planners of the early 1970s could not solve it. Write a definition of the problem and solve it, either by hand or with a planning program. A noninterleaved planner is a planner that, when given two subgoals $G_{1}$ and $G_{2}$, produces either a plan for $G_{1}$ concatenated with a plan for $G_{2}$, or vice versa. Can a noninterleaved planner solve this problem? How, or why not?

Exercise 9

Prove that backward search with PDDL problems is complete.

Exercise 10

Construct levels 0, 1, and 2 of the planning graph for the problem in Figure airport-pddl-algorithm.

Exercise 11 (graphplan-proof-exercise)

Prove the following assertions about planning graphs:
1. A literal that does not appear in the final level of the graph cannot be achieved.
2.
The level cost of a literal in a serial graph is no greater than the actual cost of an optimal plan for achieving it.

Exercise 12

We saw that planning graphs can handle only propositional actions. What if we want to use planning graphs for a problem with variables in the goal, such as ${At}(P_{1}, x) \land {At}(P_{2}, x)$, where $x$ is assumed to be bound by an existential quantifier that ranges over a finite domain of locations? How could you encode such a problem to work with planning graphs?

Exercise 13

The set-level heuristic (see page set-level-page) uses a planning graph to estimate the cost of achieving a conjunctive goal from the current state. What relaxed problem is the set-level heuristic the solution to?

Exercise 14

Examine the definition of **bidirectional search** in Chapter search-chapter.

1. Would bidirectional state-space search be a good idea for planning?
2. What about bidirectional search in the space of partial-order plans?
3. Devise a version of partial-order planning in which an action can be added to a plan if its preconditions can be achieved by the effects of actions already in the plan. Explain how to deal with conflicts and ordering constraints. Is the algorithm essentially identical to forward state-space search?

Exercise 15

We contrasted forward and backward state-space searchers with partial-order planners, saying that the latter is a plan-space searcher. Explain how forward and backward state-space search can also be considered plan-space searchers, and say what the plan refinement operators are.

Exercise 16 (satplan-preconditions-exercise)

Up to now we have assumed that the plans we create always make sure that an action’s preconditions are satisfied. Let us now investigate what propositional successor-state axioms such as ${HaveArrow}^{t+1} \;\Leftrightarrow\; ({HaveArrow}^t \land \lnot {Shoot}^t)$ have to say about actions whose preconditions are not satisfied.

1.
Show that the axioms predict that nothing will happen when an action is executed in a state where its preconditions are not satisfied.
2. Consider a plan $p$ that contains the actions required to achieve a goal but also includes illegal actions. Is it the case that $$\textit{initial state} \land \textit{successor-state axioms} \land p \models \textit{goal}\ ?$$
3. With first-order successor-state axioms in situation calculus, is it possible to prove that a plan containing illegal actions will achieve the goal?

Exercise 17 (strips-translation-exercise)

Consider how to translate a set of action schemas into the successor-state axioms of situation calculus.

1. Consider the schema for ${Fly}(p,{from},{to})$. Write a logical definition for the predicate ${Poss}({Fly}(p,{from},{to}),s)$, which is true if the preconditions for ${Fly}(p,{from},{to})$ are satisfied in situation $s$.
2. Next, assuming that ${Fly}(p,{from},{to})$ is the only action schema available to the agent, write down a successor-state axiom for ${At}(p,x,s)$ that captures the same information as the action schema.
3. Now suppose there is an additional method of travel: ${Teleport}(p,{from},{to})$. It has the additional precondition $\lnot {Warped}(p)$ and the additional effect ${Warped}(p)$. Explain how the situation calculus knowledge base must be modified.
4. Finally, develop a general and precisely specified procedure for carrying out the translation from a set of action schemas to a set of successor-state axioms.

Exercise 18 (disjunctive-satplan-exercise)

In the $SATPlan$ algorithm in Figure satplan-agent-algorithm (page satplan-agent-algorithm), each call to the satisfiability algorithm asserts a goal $g^T$, where $T$ ranges from 0 to $T_{max}$. Suppose instead that the satisfiability algorithm is called only once, with the goal $g^0 \vee g^1 \vee \cdots \vee g^{T_{max}}$.

1. Will this always return a plan if one exists with length less than or equal to $T_{max}$?
2. Does this approach introduce any new spurious “solutions”?
3.
Discuss how one might modify a satisfiability algorithm such as $WalkSAT$ so that it finds short solutions (if they exist) when given a disjunctive goal of this form.

Exercise 1

The goals we have considered so far all ask the planner to make the world satisfy the goal at just one time step. Not all goals can be expressed this way: you do not achieve the goal of suspending a chandelier above the ground by throwing it in the air. More seriously, you wouldn’t want your spacecraft life-support system to supply oxygen one day but not the next. A maintenance goal is achieved when the agent’s plan causes a condition to hold continuously from a given state onward. Describe how to extend the formalism of this chapter to support maintenance goals.

Exercise 2

You have a number of trucks with which to deliver a set of packages. Each package starts at some location on a grid map, and has a destination somewhere else. Each truck is directly controlled by moving forward and turning. Construct a hierarchy of high-level actions for this problem. What knowledge about the solution does your hierarchy encode?

Exercise 3 (HLA-unique-exercise)

Suppose that a high-level action has exactly one implementation as a sequence of primitive actions. Give an algorithm for computing its preconditions and effects, given the complete refinement hierarchy and schemas for the primitive actions.

Exercise 4

Suppose that the optimistic reachable set of a high-level plan is a superset of the goal set; can anything be concluded about whether the plan achieves the goal? What if the pessimistic reachable set doesn’t intersect the goal set?
Explain.

Exercise 5 (HLA-progression-exercise)

Write an algorithm that takes an initial state (specified by a set of propositional literals) and a sequence of HLAs (each defined by preconditions and angelic specifications of optimistic and pessimistic reachable sets) and computes optimistic and pessimistic descriptions of the reachable set of the sequence.

Exercise 6

In Figure jobshop-cpm-figure we showed how to describe actions in a scheduling problem by using separate fields for , , and . Now suppose we wanted to combine scheduling with nondeterministic planning, which requires nondeterministic and conditional effects. Consider each of the three fields and explain if they should remain separate fields, or if they should become effects of the action. Give an example for each of the three.

Exercise 7

Some of the operations in standard programming languages can be modeled as actions that change the state of the world. For example, the assignment operation changes the contents of a memory location, and the print operation changes the state of the output stream. A program consisting of these operations can also be considered as a plan, whose goal is given by the specification of the program. Therefore, planning algorithms can be used to construct programs that achieve a given specification.

1. Write an action schema for the assignment operator (assigning the value of one variable to another). Remember that the original value will be overwritten!
2. Show how object creation can be used by a planner to produce a plan for exchanging the values of two variables by using a temporary variable.

Exercise 8

Consider the following argument: In a framework that allows uncertain initial states, nondeterministic effects are just a notational convenience, not a source of additional representational power.
For any action schema $a$ with nondeterministic effect $P \lor Q$, we could always replace it with the conditional effects ${~R{:}~P} \land {~\lnot R{:}~Q}$, which in turn can be reduced to two regular actions. The proposition $R$ stands for a random proposition that is unknown in the initial state and for which there are no sensing actions. Is this argument correct? Consider separately two cases, one in which only one instance of action schema $a$ is in the plan, the other in which more than one instance is.

Exercise 9 (conformant-flip-literal-exercise)

Suppose the ${Flip}$ action always changes the truth value of variable $L$. Show how to define its effects by using an action schema with conditional effects. Show that, despite the use of conditional effects, a 1-CNF belief state representation remains in 1-CNF after a ${Flip}$.

Exercise 10

In the blocks world we were forced to introduce two action schemas, ${Move}$ and ${MoveToTable}$, in order to maintain the ${Clear}$ predicate properly. Show how conditional effects can be used to represent both of these cases with a single action.

Exercise 11 (alt-vacuum-exercise)

Conditional effects were illustrated for the ${Suck}$ action in the vacuum world: which square becomes clean depends on which square the robot is in. Can you think of a new set of propositional variables to define states of the vacuum world, such that ${Suck}$ has an unconditional description? Write out the descriptions of ${Suck}$, ${Left}$, and ${Right}$, using your propositions, and demonstrate that they suffice to describe all possible states of the world.

Exercise 12

Find a suitably dirty carpet, free of obstacles, and vacuum it. Draw the path taken by the vacuum cleaner as accurately as you can. Explain it, with reference to the forms of planning discussed in this chapter.

Exercise 13

The following quotes are from the backs of shampoo bottles. Identify each as an unconditional, conditional, or execution-monitoring plan. (a) “Lather. Rinse.
Repeat.” (b) “Apply shampoo to scalp and let it remain for several minutes. Rinse and repeat if necessary.” (c) “See a doctor if problems persist.”

Exercise 14

Consider the following problem: A patient arrives at the doctor’s office with symptoms that could have been caused either by dehydration or by disease $D$ (but not both). There are two possible actions: ${Drink}$, which unconditionally cures dehydration, and ${Medicate}$, which cures disease $D$ but has an undesirable side effect if taken when the patient is dehydrated. Write the problem description, and diagram a sensorless plan that solves the problem, enumerating all relevant possible worlds.

Exercise 15

To the medication problem in the previous exercise, add a ${Test}$ action that has the conditional effect ${CultureGrowth}$ when ${Disease}$ is true and in any case has the perceptual effect ${Known}({CultureGrowth})$. Diagram a conditional plan that solves the problem and minimizes the use of the ${Medicate}$ action.

Exercise 1

Define an ontology in first-order logic for tic-tac-toe. The ontology should contain situations, actions, squares, players, marks (X, O, or blank), and the notion of winning, losing, or drawing a game. Also define the notion of a forced win (or draw): a position from which a player can force a win (or draw) with the right sequence of actions. Write axioms for the domain. (Note: The axioms that enumerate the different squares and that characterize the winning positions are rather long. You need not write these out in full, but indicate clearly what they look like.)

Exercise 2

You are to create a system for advising computer science undergraduates on what courses to take over an extended period in order to satisfy the program requirements. (Use whatever requirements are appropriate for your institution.) First, decide on a vocabulary for representing all the information, and then represent it; then formulate a query to the system that will return a legal program of study as a solution.
You should allow for some tailoring to individual students, in that your system should ask what courses or equivalents the student has already taken, and not generate programs that repeat those courses. Suggest ways in which your system could be improved, for example, to take into account knowledge about student preferences, the workload, good and bad instructors, and so on. For each kind of knowledge, explain how it could be expressed logically. Could your system easily incorporate this information to find all feasible programs of study for a student? Could it find the best program?

Exercise 3

Figure ontology-figure shows the top levels of a hierarchy for everything. Extend it to include as many real categories as possible. A good way to do this is to cover all the things in your everyday life. This includes objects and events. Start with waking up, and proceed in an orderly fashion noting everything that you see, touch, do, and think about. For example, a random sampling produces music, news, milk, walking, driving, gas, Soda Hall, carpet, talking, Professor Fateman, chicken curry, tongue, $7, sun, the daily newspaper, and so on. You should produce both a single hierarchy chart (on a large sheet of paper) and a listing of objects and categories with the relations satisfied by members of each category. Every object should be in a category, and every category should be in the hierarchy.

Exercise 4 (windows-exercise)

Develop a representational system for reasoning about windows in a window-based computer interface. In particular, your representation should be able to describe:

- The state of a window: minimized, displayed, or nonexistent.
- Which window (if any) is the active window.
- The position of every window at a given time.
- The order (front to back) of overlapping windows.
- The actions of creating, destroying, resizing, and moving windows; changing the state of a window; and bringing a window to the front.
Treat these actions as atomic; that is, do not deal with the issue of relating them to mouse actions. Give axioms describing the effects of actions on fluents. You may use either event or situation calculus. Assume an ontology containing situations, actions, integers (for $x$ and $y$ coordinates) and windows. Define a language over this ontology; that is, a list of constants, function symbols, and predicates with an English description of each. If you need to add more categories to the ontology (e.g., pixels), you may do so, but be sure to specify these in your write-up. You may (and should) use symbols defined in the text, but be sure to list these explicitly.

Exercise 5

State the following in the language you developed for the previous exercise:

1. In situation $S_0$, window $W_1$ is behind $W_2$ but sticks out on the top and bottom. Do not state exact coordinates for these; describe the general situation.
2. If a window is displayed, then its top edge is higher than its bottom edge.
3. After you create a window $w$, it is displayed.
4. A window can be minimized only if it is displayed.

Exercise 6

State the following in the language you developed for the previous exercise:

1. In situation $S_0$, window $W_1$ is behind $W_2$ but sticks out on the top and bottom. Do not state exact coordinates for these; describe the general situation.
2. If a window is displayed, then its top edge is higher than its bottom edge.
3. After you create a window $w$, it is displayed.
4. A window can be minimized only if it is displayed.

Exercise 7

(Adapted from an example by Doug Lenat.) Your mission is to capture, in logical form, enough knowledge to answer a series of questions about the following simple scenario:

Yesterday John went to the North Berkeley Safeway supermarket and bought two pounds of tomatoes and a pound of ground beef.

Start by trying to represent the content of the sentence as a series of assertions.
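One possible scaffold for such a set of assertions is a store of (subject, relation, object) triples with a simple pattern-matching query. Every symbol below (John, Buy1, weight_lb, and so on) is an illustrative invention, not a vocabulary fixed by the exercise:

```python
# A starting point for the scenario: assertions as (subject, relation, object)
# triples, plus a tiny wildcard query over them. All names are illustrative.
KB = [
    ("John",        "isa",       "Adult"),
    ("Shopping1",   "isa",       "ShoppingEvent"),
    ("Shopping1",   "agent",     "John"),
    ("Shopping1",   "location",  "NorthBerkeleySafeway"),
    ("Shopping1",   "when",      "Yesterday"),
    ("Buy1",        "subevent",  "Shopping1"),
    ("Buy1",        "object",    "Tomatoes1"),
    ("Tomatoes1",   "isa",       "Tomatoes"),
    ("Tomatoes1",   "weight_lb", 2),
    ("Buy2",        "subevent",  "Shopping1"),
    ("Buy2",        "object",    "GroundBeef1"),
    ("GroundBeef1", "isa",       "GroundBeef"),
    ("GroundBeef1", "weight_lb", 1),
    ("GroundBeef",  "subclass",  "Meat"),   # background knowledge
]

def ask(s=None, r=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in KB
            if (s is None or t[0] == s)
            and (r is None or t[1] == r)
            and (o is None or t[2] == o)]

# Answering "Did John buy any meat?" would then chain: find objects of Buy
# events in John's shopping trip, and follow "isa" and "subclass" up to Meat.
```

Such a flat store makes the need for background knowledge concrete: the last triple is exactly the kind of fact the questions below cannot be answered without.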
You should write sentences that have straightforward logical structure (e.g., statements that objects have certain properties, that objects are related in certain ways, that all objects satisfying one property satisfy another). The following might help you get started:

- Which classes, objects, and relations would you need? What are their parents, siblings and so on? (You will need events and temporal ordering, among other things.)
- Where would they fit in a more general hierarchy?
- What are the constraints and interrelationships among them?
- How detailed must you be about each of the various concepts?

To answer the questions below, your knowledge base must include background knowledge. You’ll have to deal with what kind of things are at a supermarket, what is involved with purchasing the things one selects, what the purchases will be used for, and so on. Try to make your representation as general as possible. To give a trivial example: don’t say “People buy food from Safeway,” because that won’t help you with those who shop at another supermarket. Also, don’t turn the questions into answers; for example, question (c) asks “Did John buy any meat?”, not “Did John buy a pound of ground beef?”

Sketch the chains of reasoning that would answer the questions. If possible, use a logical reasoning system to demonstrate the sufficiency of your knowledge base. Many of the things you write might be only approximately correct in reality, but don’t worry too much; the idea is to extract the common sense that lets you answer these questions at all. A truly complete answer to this question is extremely difficult, probably beyond the state of the art of current knowledge representation. But you should be able to put together a consistent set of axioms for the limited questions posed here.

1. Is John a child or an adult? [Adult]
2. Does John now have at least two tomatoes? [Yes]
3. Did John buy any meat? [Yes]
4. If Mary was buying tomatoes at the same time as John, did he see her? [Yes]
5.
Are the tomatoes made in the supermarket? [No]
6. What is John going to do with the tomatoes? [Eat them]
7. Does Safeway sell deodorant? [Yes]
8. Did John bring some money or a credit card to the supermarket? [Yes]
9. Does John have less money after going to the supermarket? [Yes]

Exercise 8

Make the necessary additions or changes to your knowledge base from the previous exercise so that the questions that follow can be answered. Include in your report a discussion of your changes, explaining why they were needed, whether they were minor or major, and what kinds of questions would necessitate further changes.

1. Are there other people in Safeway while John is there? [Yes: staff!]
2. Is John a vegetarian? [No]
3. Who owns the deodorant in Safeway? [Safeway Corporation]
4. Did John have an ounce of ground beef? [Yes]
5. Does the Shell station next door have any gas? [Yes]
6. Do the tomatoes fit in John’s car trunk? [Yes]

Exercise 9

Represent the following seven sentences using and extending the representations developed in the chapter:

1. Water is a liquid between 0 and 100 degrees.
2. Water boils at 100 degrees.
3. The water in John’s water bottle is frozen.
4. Perrier is a kind of water.
5. John has Perrier in his water bottle.
6. All liquids have a freezing point.
7. A liter of water weighs more than a liter of alcohol.

Exercise 10 (part-decomposition-exercise)

Write definitions for the following:

1. ${ExhaustivePartDecomposition}$
2. ${PartPartition}$
3. ${PartwiseDisjoint}$

These should be analogous to the definitions for ${ExhaustiveDecomposition}$, ${Partition}$, and ${Disjoint}$. Is it the case that ${PartPartition}(s,{BunchOf}(s))$? If so, prove it; if not, give a counterexample and define sufficient conditions under which it does hold.

Exercise 11 (alt-measure-exercise)

An alternative scheme for representing measures involves applying the units function to an abstract length object.
In such a scheme, one would write ${Inches}({Length}(L_1)) = 1.5$. How does this scheme compare with the one in the chapter? Issues include conversion axioms, names for abstract quantities (such as “50 dollars”), and comparisons of abstract measures in different units (50 inches is more than 50 centimeters).

Exercise 12

Write a set of sentences that allows one to calculate the price of an individual tomato (or other object), given the price per pound. Extend the theory to allow the price of a bag of tomatoes to be calculated.

Exercise 13 (namematch-exercise)

Add sentences to extend the definition of the predicate ${Name}(s, c)$ so that a string such as “laptop computer” matches the appropriate category names from a variety of stores. Try to make your definition general. Test it by looking at ten online stores, and at the category names they give for three different categories. For example, for the category of laptops, we found the names “Notebooks,” “Laptops,” “Notebook Computers,” “Notebook,” “Laptops and Notebooks,” and “Notebook PCs.” Some of these can be covered by explicit ${Name}$ facts, while others could be covered by sentences for handling plurals, conjunctions, etc.

Exercise 14

Write event calculus axioms to describe the actions in the wumpus world.

Exercise 15

State the interval-algebra relation that holds between every pair of the following real-world events:

> $LK$: The life of President Kennedy.
> $IK$: The infancy of President Kennedy.
> $PK$: The presidency of President Kennedy.
> $LJ$: The life of President Johnson.
> $PJ$: The presidency of President Johnson.
> $LO$: The life of President Obama.

Exercise 16

This exercise concerns the problem of planning a route for a robot to take from one city to another. The basic action taken by the robot is ${Go}(x,y)$, which takes it from city $x$ to city $y$ if there is a route between those cities.
${Road}(x, y)$ is true if and only if there is a road connecting cities $x$ and $y$; if there is, then ${Distance}(x, y)$ gives the length of the road. See the map on page romania-distances-figure for an example. The robot begins in Arad and must reach Bucharest.

1. Write a suitable logical description of the initial situation of the robot.
2. Write a suitable logical query whose solutions provide possible paths to the goal.
3. Write a sentence describing the ${Go}$ action.
4. Now suppose that the robot consumes fuel at the rate of 0.02 gallons per mile. The robot starts with 20 gallons of fuel. Augment your representation to include these considerations.
5. Now suppose some of the cities have gas stations at which the robot can fill its tank. Extend your representation and write all the rules needed to describe gas stations, including the ${Fillup}$ action.

Exercise 17

Investigate ways to extend the event calculus to handle simultaneous events. Is it possible to avoid a combinatorial explosion of axioms?

Exercise 18 (exchange-rates-exercise)

Construct a representation for exchange rates between currencies that allows for daily fluctuations.

Exercise 19 (fixed-definition-exercise)

Define the predicate ${Fixed}$, where ${Fixed}({Location}(x))$ means that the location of object $x$ is fixed over time.

Exercise 20

Describe the event of trading something for something else. Describe buying as a kind of trading in which one of the objects traded is a sum of money.

Exercise 21

The two preceding exercises assume a fairly primitive notion of ownership. For example, the buyer starts by owning the dollar bills. This picture begins to break down when, for example, one’s money is in the bank, because there is no longer any specific collection of dollar bills that one owns. The picture is complicated still further by borrowing, leasing, renting, and bailment.
Investigate the various commonsense and legal concepts of ownership, and propose a scheme by which they can be represented formally.

Exercise 22 (card-on-forehead-exercise)

(Adapted from Fagin+al:1995.) Consider a game played with a deck of just 8 cards, 4 aces and 4 kings. The three players, Alice, Bob, and Carlos, are dealt two cards each. Without looking at them, they place the cards on their foreheads so that the other players can see them. Then the players take turns either announcing that they know what cards are on their own forehead, thereby winning the game, or saying “I don’t know.” Everyone knows the players are truthful and are perfect at reasoning about beliefs.

1. Game 1. Alice and Bob have both said “I don’t know.” Carlos sees that Alice has two aces (A-A) and Bob has two kings (K-K). What should Carlos say? (Hint: consider all three possible cases for Carlos: A-A, K-K, A-K.)
2. Describe each step of Game 1 using the notation of modal logic.
3. Game 2. Carlos, Alice, and Bob all said “I don’t know” on their first turn. Alice holds K-K and Bob holds A-K. What should Carlos say on his second turn?
4. Game 3. Alice, Carlos, and Bob all say “I don’t know” on their first turn, as does Alice on her second turn. Alice and Bob both hold A-K. What should Carlos say?
5. Prove that there will always be a winner to this game.

Exercise 23

The assumption of logical omniscience, discussed on page logical-omniscience, is of course not true of any actual reasoners. Rather, it is an idealization of the reasoning process that may be more or less acceptable depending on the applications. Discuss the reasonableness of the assumption for each of the following applications of reasoning about knowledge:

1. Partial knowledge adversary games, such as card games. Here one player wants to reason about what his opponent knows about the state of the game.
2. Chess with a clock.
Here the player may wish to reason about the limits of his opponent’s or his own ability to find the best move in the time available. For instance, if player A has much more time left than player B, then A will sometimes make a move that greatly complicates the situation, in the hopes of gaining an advantage because he has more time to work out the proper strategy.
3. A shopping agent in an environment in which there are costs of gathering information.
4. Reasoning about public key cryptography, which rests on the intractability of certain computational problems.

Exercise 24

The assumption of logical omniscience, discussed on page logical-omniscience, is of course not true of any actual reasoners. Rather, it is an idealization of the reasoning process that may be more or less acceptable depending on the applications. Discuss the reasonableness of the assumption for each of the following applications of reasoning about knowledge:

1. Partial knowledge adversary games, such as card games. Here one player wants to reason about what his opponent knows about the state of the game.
2. Chess with a clock. Here the player may wish to reason about the limits of his opponent’s or his own ability to find the best move in the time available. For instance, if player A has much more time left than player B, then A will sometimes make a move that greatly complicates the situation, in the hopes of gaining an advantage because he has more time to work out the proper strategy.
3. A shopping agent in an environment in which there are costs of gathering information.
4.
Reasoning about public key cryptography, which rests on the intractability of certain computational problems.

Exercise 25

Translate the following description logic expression (from page description-logic-ex) into first-order logic, and comment on the result:

$$And(Man, AtLeast(3, Son), AtMost(2, Daughter), All(Son, And(Unemployed, Married, All(Spouse, Doctor))), All(Daughter, And(Professor, Fills(Department, Physics, Math))))$$

Exercise 26

Recall that inheritance information in semantic networks can be captured logically by suitable implication sentences. This exercise investigates the efficiency of using such sentences for inheritance.

1. Consider the information in a used-car catalog such as Kelly’s Blue Book, for example, that 1973 Dodge vans are (or perhaps were once) worth $575. Suppose all this information (for 11,000 models) is encoded as logical sentences, as suggested in the chapter. Write down three such sentences, including that for 1973 Dodge vans. How would you use the sentences to find the value of a particular car, given a backward-chaining theorem prover such as Prolog?
2. Compare the time efficiency of the backward-chaining method for solving this problem with the inheritance method used in semantic nets.
3. Explain how forward chaining allows a logic-based system to solve the same problem efficiently, assuming that the KB contains only the 11,000 sentences about prices.
4. Describe a situation in which neither forward nor backward chaining on the sentences will allow the price query for an individual car to be handled efficiently.
5. Can you suggest a solution enabling this type of query to be solved efficiently in all cases in logic systems?
(Hint: Remember that two cars of the same year and model have the same price.)

Exercise 27 (natural-stupidity-exercise)

One might suppose that the syntactic distinction between unboxed links and singly boxed links in semantic networks is unnecessary, because singly boxed links are always attached to categories; an inheritance algorithm could simply assume that an unboxed link attached to a category is intended to apply to all members of that category. Show that this argument is fallacious, giving examples of errors that would arise.

Exercise 28

One part of the shopping process that was not covered in this chapter is checking for compatibility between items. For example, if a digital camera is ordered, what accessory batteries, memory cards, and cases are compatible with the camera? Write a knowledge base that can determine the compatibility of a set of items and suggest replacements or additional items if the shopper makes a choice that is not compatible. The knowledge base should work with at least one line of products and extend easily to other lines.

Exercise 29 (shopping-grammar-exercise)

A complete solution to the problem of inexact matches to the buyer’s description in shopping is very difficult and requires a full array of natural language processing and information retrieval techniques. (See Chapters nlp1-chapter and nlp-english-chapter.) One small step is to allow the user to specify minimum and maximum values for various attributes. The buyer must use the following grammar for product descriptions:

$$Description \rightarrow Category \ [Connector \ Modifier]^*$$
$$Connector \rightarrow \textit{"with"} \mid \textit{"and"} \mid \textit{","}$$
$$Modifier \rightarrow Attribute \mid Attribute \ Op \ Value$$
$$Op \rightarrow \textit{"="} \mid \textit{">"} \mid \textit{"<"}$$

Here, ${Category}$ names a product category, ${Attribute}$ is some feature such as “CPU” or “price,” and ${Value}$ is the target value for the attribute.
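A recognizer for this grammar can be sketched as a small recursive-descent parser. The vocabularies below and the restriction of $Value$ to a single token are simplifying assumptions, not part of the exercise:

```python
import re

# Recursive-descent recognizer for the product-description grammar:
#   Description -> Category [Connector Modifier]*
#   Connector   -> "with" | "and" | ","
#   Modifier    -> Attribute | Attribute Op Value
#   Op          -> "=" | ">" | "<"
# The word lists here are illustrative placeholders.
CATEGORIES = {"computer", "camera"}
ATTRIBUTES = {"CPU", "price", "resolution"}
CONNECTORS = {"with", "and", ","}
OPS = {"=", ">", "<"}

def parse(description):
    """Return (category, [(attribute, op, value), ...]) or None if ill-formed."""
    tokens = re.findall(r"[<>=,]|[^\s<>=,]+", description)
    if not tokens or tokens[0] not in CATEGORIES:
        return None
    category, modifiers, i = tokens[0], [], 1
    while i < len(tokens):
        if tokens[i] not in CONNECTORS:            # expect a Connector
            return None
        i += 1
        if i == len(tokens) or tokens[i] not in ATTRIBUTES:
            return None                            # expect an Attribute
        attribute, i = tokens[i], i + 1
        if i < len(tokens) and tokens[i] in OPS:   # optional Op Value
            if i + 1 == len(tokens):
                return None
            modifiers.append((attribute, tokens[i], tokens[i + 1]))
            i += 2
        else:
            modifiers.append((attribute, None, None))
    return category, modifiers
```

For instance, `parse("computer with CPU > 2.5GHz and price < 500")` yields the category and two constrained attributes; a full agent would still need the real category and attribute vocabularies behind these placeholder sets.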
So the query “computer with at least a 2.5 GHz CPU for under 500” must be re-expressed as “computer with CPU $>$ 2.5 GHz and price $<$ 500.” Implement a shopping agent that accepts descriptions in this language.

Exercise 30 (buying-exercise)

Our description of Internet shopping omitted the all-important step of actually buying the product. Provide a formal logical description of buying, using event calculus. That is, define the sequence of events that occurs when a buyer submits a credit-card purchase and then eventually gets billed and receives the product.

Exercise 1

Show from first principles that $P(a \mid b \land a) = 1$.

Exercise 2 (sum-to-1-exercise)

Using the axioms of probability, prove that any probability distribution on a discrete random variable must sum to 1.

Exercise 3

For each of the following statements, either prove it is true or give a counterexample.

1. If $P(a \mid b, c) = P(b \mid a, c)$, then $P(a \mid c) = P(b \mid c)$
2. If $P(a \mid b, c) = P(a)$, then $P(b \mid c) = P(b)$
3. If $P(a \mid b) = P(a)$, then $P(a \mid b, c) = P(a \mid c)$

Exercise 4

Would it be rational for an agent to hold the three beliefs $P(A) = 0.4$, $P(B) = 0.3$, and $P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) = 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a loss?)

Exercise 5 (exclusive-exhaustive-exercise)

This question deals with the properties of possible worlds, defined on page possible-worlds-page as assignments to all random variables. We will work with propositions that correspond to exactly one possible world because they pin down the assignments of all the variables.
In probability theory, such propositions are called atomic events. For example, with Boolean variables $X_1$, $X_2$, $X_3$, the proposition $x_1 \land \lnot x_2 \land \lnot x_3$ fixes the assignment of the variables; in the language of propositional logic, we would say it has exactly one model.

1. Prove, for the case of $n$ Boolean variables, that any two distinct atomic events are mutually exclusive; that is, their conjunction is equivalent to ${false}$.
2. Prove that the disjunction of all possible atomic events is logically equivalent to ${true}$.
3. Prove that any proposition is logically equivalent to the disjunction of the atomic events that entail its truth.

Exercise 6 (inclusion-exclusion-exercise)

Prove Equation (kolmogorov-disjunction-equation) from Equations (basic-probability-axiom-equation) and (proposition-probability-equation).

Exercise 7

Consider the set of all possible five-card poker hands dealt fairly from a standard deck of fifty-two cards.

1. How many atomic events are there in the joint probability distribution (i.e., how many five-card hands are there)?
2. What is the probability of each atomic event?
3. What is the probability of being dealt a royal straight flush? Four of a kind?

Exercise 8

Given the full joint distribution shown in Figure dentist-joint-table, calculate the following:

1. $\textbf{P}({toothache})$.
2. $\textbf{P}({Cavity})$.
3. $\textbf{P}({Toothache} \mid {cavity})$.
4. $\textbf{P}({Cavity} \mid {toothache} \lor {catch})$.

Exercise 9

Given the full joint distribution shown in Figure dentist-joint-table, calculate the following:

1. $\textbf{P}({toothache})$.
2. $\textbf{P}({Catch})$.
3. $\textbf{P}({Cavity} \mid {catch})$.
4. $\textbf{P}({Cavity} \mid {toothache} \lor {catch})$.

Exercise 10 (unfinished-game-exercise)

In his letter of August 24, 1654, Pascal was trying to show how a pot of money should be allocated when a gambling game must end prematurely.
Imagine a game where each turn consists of the roll of a die, player E gets a point when the die is even, and player O gets a point when the die is odd. The first player to get 7 points wins the pot. Suppose the game is interrupted with E leading 4–2. How should the money be fairly split in this case? What is the general formula? (Fermat and Pascal made several errors before solving the problem, but you should be able to get it right the first time.)

Exercise 11 Deciding to put probability theory to good use, we encounter a slot machine with three independent wheels, each producing one of the four symbols bar, bell, lemon, or cherry with equal probability. The slot machine has the following payout scheme for a bet of 1 coin (where "?" denotes that we don't care what comes up for that wheel):

> bar/bar/bar pays 20 coins
> bell/bell/bell pays 15 coins
> lemon/lemon/lemon pays 5 coins
> cherry/cherry/cherry pays 3 coins
> cherry/cherry/? pays 2 coins
> cherry/?/? pays 1 coin

1. Compute the expected "payback" percentage of the machine. In other words, for each coin played, what is the expected coin return?
2. Compute the probability that playing the slot machine once will result in a win.
3. Estimate the mean and median number of plays you can expect to make until you go broke, if you start with 10 coins. You can run a simulation to estimate this, rather than trying to compute an exact answer.

Exercise 12 Deciding to put probability theory to good use, we encounter a slot machine with three independent wheels, each producing one of the four symbols bar, bell, lemon, or cherry with equal probability. The slot machine has the following payout scheme for a bet of 1 coin (where "?" denotes that we don't care what comes up for that wheel):

> bar/bar/bar pays 20 coins
> bell/bell/bell pays 15 coins
> lemon/lemon/lemon pays 5 coins
> cherry/cherry/cherry pays 3 coins
> cherry/cherry/? pays 2 coins
> cherry/?/? pays 1 coin

1. Compute the expected "payback" percentage of the machine. In other words, for each coin played, what is the expected coin return?
2. Compute the probability that playing the slot machine once will result in a win.
3. Estimate the mean and median number of plays you can expect to make until you go broke, if you start with 10 coins. You can run a simulation to estimate this, rather than trying to compute an exact answer.

Exercise 13 We wish to transmit an $n$-bit message to a receiving agent. The bits in the message are independently corrupted (flipped) during transmission with $\epsilon$ probability each. With an extra parity bit sent along with the original information, a message can be corrected by the receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case $\epsilon = 0.001$, $\delta = 0.01$.

Exercise 14 We wish to transmit an $n$-bit message to a receiving agent. The bits in the message are independently corrupted (flipped) during transmission with $\epsilon$ probability each. With an extra parity bit sent along with the original information, a message can be corrected by the receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case $\epsilon = 0.002$, $\delta = 0.01$.

Exercise 15 (independence-exercise) Show that the three forms of independence in Equation (independence-equation) are equivalent.

Exercise 16 Consider two medical tests, A and B, for a virus. Test A is 95% effective at recognizing the virus when it is present, but has a 10% false positive rate (indicating that the virus is present, when it is not).
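The bound in Exercises 13 and 14 can be found numerically: with the parity bit, a message of $n$ bits is transmitted as $n+1$ bits and is recovered iff at most one bit flips. A small search sketch, assuming independent flips and exact binomial terms:

```python
def max_message_bits(eps, delta):
    """Largest n such that an n-bit message plus one parity bit
    (n + 1 transmitted bits) sees at most one flip with
    probability at least 1 - delta."""
    n = 0
    while True:
        m = (n + 1) + 1  # candidate message of n + 1 bits, plus parity
        # P(zero flips) + P(exactly one flip) among m bits:
        p_ok = (1 - eps) ** m + m * eps * (1 - eps) ** (m - 1)
        if p_ok < 1 - delta:
            return n     # n + 1 message bits would violate the bound
        n += 1

print(max_message_bits(0.001, 0.01))  # Exercise 13's case
print(max_message_bits(0.002, 0.01))  # Exercise 14's case
```

Doubling $\epsilon$ roughly quarters the feasible message length, since the failure probability is dominated by the two-flip term, which scales as $\epsilon^2$.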
Test B is 90% effective at recognizing the virus, but has a 5% false positive rate. The two tests use independent methods of identifying the virus. The virus is carried by 1% of all people. Say that a person is tested for the virus using only one of the tests, and that test comes back positive for carrying the virus. Which test returning positive is more indicative of someone really carrying the virus? Justify your answer mathematically.

Exercise 17 Suppose you are given a coin that lands ${heads}$ with probability $x$ and ${tails}$ with probability $1 - x$. Are the outcomes of successive flips of the coin independent of each other given that you know the value of $x$? Are the outcomes of successive flips of the coin independent of each other if you do not know the value of $x$? Justify your answer.

Exercise 18 After your yearly checkup, the doctor has bad news and good news. The bad news is that you tested positive for a serious disease and that the test is 99% accurate (i.e., the probability of testing positive when you do have the disease is 0.99, as is the probability of testing negative when you don't have the disease). The good news is that this is a rare disease, striking only 1 in 10,000 people of your age. Why is it good news that the disease is rare? What are the chances that you actually have the disease?

Exercise 19 After your yearly checkup, the doctor has bad news and good news. The bad news is that you tested positive for a serious disease and that the test is 99% accurate (i.e., the probability of testing positive when you do have the disease is 0.99, as is the probability of testing negative when you don't have the disease). The good news is that this is a rare disease, striking only 1 in 100,000 people of your age. Why is it good news that the disease is rare?
What are the chances that you actually have the disease?

Exercise 20 (conditional-bayes-exercise) It is quite often useful to consider the effect of some specific propositions in the context of some general background evidence that remains fixed, rather than in the complete absence of information. The following questions ask you to prove more general versions of the product rule and Bayes' rule, with respect to some background evidence $\textbf{e}$:

1. Prove the conditionalized version of the general product rule: $$\textbf{P}(X,Y \mid \textbf{e}) = \textbf{P}(X \mid Y,\textbf{e})\, \textbf{P}(Y \mid \textbf{e}) .$$
2. Prove the conditionalized version of Bayes' rule in Equation (conditional-bayes-equation).

Exercise 21 (pv-xyz-exercise) Show that the statement of conditional independence $$\textbf{P}(X,Y \mid Z) = \textbf{P}(X \mid Z)\, \textbf{P}(Y \mid Z)$$ is equivalent to each of the statements $$\textbf{P}(X \mid Y,Z) = \textbf{P}(X \mid Z) \quad\mbox{and}\quad \textbf{P}(Y \mid X,Z) = \textbf{P}(Y \mid Z) .$$

Exercise 22 Suppose you are given a bag containing $n$ unbiased coins. You are told that $n-1$ of these coins are normal, with heads on one side and tails on the other, whereas one coin is a fake, with heads on both sides.

1. Suppose you reach into the bag, pick out a coin at random, flip it, and get a head. What is the (conditional) probability that the coin you chose is the fake coin?
2. Suppose you continue flipping the coin for a total of $k$ times after picking it and see $k$ heads. Now what is the conditional probability that you picked the fake coin?
3. Suppose you wanted to decide whether the chosen coin was fake by flipping it $k$ times. The decision procedure returns ${fake}$ if all $k$ flips come up heads; otherwise it returns ${normal}$. What is the (unconditional) probability that this procedure makes an error?

Exercise 23 (normalization-exercise) In this exercise, you will complete the normalization calculation for the meningitis example.
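The arithmetic behind Exercises 18 and 19 is a one-line Bayes update, and it also illustrates the normalization step that Exercise 23 asks for: compute the unnormalized products and divide by their sum. A sketch:

```python
def posterior(prior, sens, spec):
    """P(disease | positive test) by Bayes' rule with normalization.

    sens = P(positive | disease), spec = P(negative | no disease).
    """
    # Unnormalized values, as in the normalization exercise:
    u_disease = sens * prior
    u_healthy = (1 - spec) * (1 - prior)
    return u_disease / (u_disease + u_healthy)

# 99%-accurate test, disease strikes 1 in 10,000 (Exercise 18):
print(posterior(1e-4, 0.99, 0.99))  # roughly 0.0098
# 1 in 100,000 (Exercise 19):
print(posterior(1e-5, 0.99, 0.99))
```

Even after a positive result, the posterior stays below 1%: the good news is that the tiny prior dominates the likelihood ratio.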
First, make up a suitable value for $P(s \mid \lnot m)$, and use it to calculate unnormalized values for $P(m \mid s)$ and $P(\lnot m \mid s)$ (i.e., ignoring the $P(s)$ term in the Bayes' rule expression, Equation (meningitis-bayes-equation)). Now normalize these values so that they add to 1.

Exercise 24 This exercise investigates the way in which conditional independence relationships affect the amount of information needed for probabilistic calculations.

1. Suppose we wish to calculate $P(h \mid e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
   1. $\textbf{P}(E_1,E_2)$, $\textbf{P}(H)$, $\textbf{P}(E_1 \mid H)$, $\textbf{P}(E_2 \mid H)$
   2. $\textbf{P}(E_1,E_2)$, $\textbf{P}(H)$, $\textbf{P}(E_1,E_2 \mid H)$
   3. $\textbf{P}(H)$, $\textbf{P}(E_1 \mid H)$, $\textbf{P}(E_2 \mid H)$
2. Suppose we know that $\textbf{P}(E_1 \mid H,E_2) = \textbf{P}(E_1 \mid H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient?

Exercise 25 Let $X$, $Y$, $Z$ be Boolean random variables. Label the eight entries in the joint distribution $\textbf{P}(X,Y,Z)$ as $a$ through $h$. Express the statement that $X$ and $Y$ are conditionally independent given $Z$, as a set of equations relating $a$ through $h$. How many nonredundant equations are there?

Exercise 26 (Adapted from Pearl [Pearl:1988].) Suppose you are a witness to a nighttime hit-and-run accident involving a taxi in Athens. All taxis in Athens are blue or green. You swear, under oath, that the taxi was blue. Extensive testing shows that, under the dim lighting conditions, discrimination between blue and green is 75% reliable.

1. Is it possible to calculate the most likely color for the taxi? (*Hint:* distinguish carefully between the proposition that the taxi *is* blue and the proposition that it *appears* blue.)
2. What if you know that 9 out of 10 Athenian taxis are green?

Exercise 27 Write out a general algorithm for answering queries of the form $\textbf{P}({Cause} \mid \textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables.

Exercise 28 (naive-bayes-retrieval-exercise) Text categorization is the task of assigning a given document to one of a fixed set of categories on the basis of the text it contains. Naive Bayes models are often used for this task. In these models, the query variable is the document category, and the "effect" variables are the presence or absence of each word in the language; the assumption is that words occur independently in documents, with frequencies determined by the document category.

1. Explain precisely how such a model can be constructed, given as "training data" a set of documents that have been assigned to categories.
2. Explain precisely how to categorize a new document.
3. Is the conditional independence assumption reasonable? Discuss.

Exercise 29 In our analysis of the wumpus world, we used the fact that each square contains a pit with probability 0.2, independently of the contents of the other squares. Suppose instead that exactly $N/5$ pits are scattered at random among the $N$ squares other than [1,1]. Are the variables $P_{i,j}$ and $P_{k,l}$ still independent? What is the joint distribution $\textbf{P}(P_{1,1},\ldots,P_{4,4})$ now? Redo the calculation for the probabilities of pits in [1,3] and [2,2].

Exercise 30 Redo the probability calculation for pits in [1,3] and [2,2], assuming that each square contains a pit with probability 0.01, independent of the other squares.
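One way to realize the algorithm Exercise 27 asks for is to multiply the prior by the CPT entry for each *observed* effect, skipping unobserved ones, then normalize. The sketch below uses made-up cavity/toothache/catch numbers purely for illustration:

```python
def naive_bayes_query(prior, cpts, evidence):
    """P(Cause | e) under a naive Bayes model.

    prior:    {cause_value: P(cause_value)}
    cpts:     {effect_name: {(effect_value, cause_value): probability}}
    evidence: {effect_name: observed_value} for any subset of effects.
    """
    unnorm = {}
    for c, p in prior.items():
        for e_name, e_val in evidence.items():
            p *= cpts[e_name][(e_val, c)]
        unnorm[c] = p
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# Hypothetical numbers: a cavity causes toothache and catch.
prior = {True: 0.2, False: 0.8}
cpts = {
    "toothache": {(True, True): 0.6, (False, True): 0.4,
                  (True, False): 0.1, (False, False): 0.9},
    "catch":     {(True, True): 0.9, (False, True): 0.1,
                  (True, False): 0.2, (False, False): 0.8},
}
print(naive_bayes_query(prior, cpts, {"toothache": True}))
```

Because unobserved effects contribute a factor that sums to 1 over their values, they can simply be omitted from the product, which is what makes partial evidence easy here.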
What can you say about the relative performance of a logical versus a probabilistic agent in this case?

Exercise 31 Implement a hybrid probabilistic agent for the wumpus world, based on the hybrid agent in Figure hybrid-wumpus-agent-algorithm and the probabilistic inference procedure outlined in this chapter.

Exercise 1 We have a bag of three biased coins $a$, $b$, and $c$ with probabilities of coming up heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.

1. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
2. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.

Exercise 2 We have a bag of three biased coins $a$, $b$, and $c$ with probabilities of coming up heads of 30%, 60%, and 75%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.

1. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
2. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.

Exercise 3 (cpt-equivalence-exercise) Equation (parameter-joint-repn-equation) on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters $\theta(X_i \mid {Parents}(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities $\textbf{P}(X_i \mid {Parents}(X_i))$ from this definition.

1. Consider a simple network $X \rightarrow Y \rightarrow Z$ with three Boolean variables.
Use Equations (conditional-probability-equation) and (marginalization-equation) (pages conditional-probability-equation and marginalization-equation) to express the conditional probability $P(z \mid y)$ as the ratio of two sums, each over entries in the joint distribution $\textbf{P}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation) to write this expression in terms of the network parameters $\theta(X)$, $\theta(Y \mid X)$, and $\theta(Z \mid Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint $\sum_{x_i} \theta(x_i \mid {parents}(X_i)) = 1$, show that the resulting expression reduces to $\theta(z \mid y)$.
4. Generalize this derivation to show that $\theta(X_i \mid {Parents}(X_i)) = \textbf{P}(X_i \mid {Parents}(X_i))$ for any Bayesian network.

Exercise 4 The arc reversal operation in a Bayesian network allows us to change the direction of an arc $X \rightarrow Y$ while preserving the joint probability distribution that the network represents [Shachter:1986]. Arc reversal may require introducing new arcs: all the parents of $X$ also become parents of $Y$, and all parents of $Y$ also become parents of $X$.

1. Assume that $X$ and $Y$ start with $m$ and $n$ parents, respectively, and that all variables have $k$ values. By calculating the change in size for the CPTs of $X$ and $Y$, show that the total number of parameters in the network cannot decrease during arc reversal. (Hint: the parents of $X$ and $Y$ need not be disjoint.)
2. Under what circumstances can the total number remain constant?
3. Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint.
The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} \textbf{P}(Y \mid \textbf{U},\textbf{V},\textbf{W}) &= \sum_x \textbf{P}(Y \mid \textbf{V},\textbf{W}, x)\, \textbf{P}(x \mid \textbf{U}, \textbf{V}) \\ \textbf{P}(X \mid \textbf{U},\textbf{V},\textbf{W}, Y) &= \textbf{P}(Y \mid X, \textbf{V}, \textbf{W})\, \textbf{P}(X \mid \textbf{U}, \textbf{V}) \,/\, \textbf{P}(Y \mid \textbf{U},\textbf{V},\textbf{W}) .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.

Exercise 5 Consider the Bayesian network in Figure burglary-figure.

1. If no evidence is observed, are ${Burglary}$ and ${Earthquake}$ independent? Prove this from the numerical semantics and from the topological semantics.
2. If we observe ${Alarm} = {true}$, are ${Burglary}$ and ${Earthquake}$ independent? Justify your answer by calculating whether the probabilities involved satisfy the definition of conditional independence.

Exercise 6 Suppose that in a Bayesian network containing an unobserved variable $Y$, all the variables in the Markov blanket ${MB}(Y)$ have been observed.

1. Prove that removing the node $Y$ from the network will not affect the posterior distribution for any other unobserved variable in the network.
2. Discuss whether we can remove $Y$ if we are planning to use (i) rejection sampling and (ii) likelihood weighting.

Three possible structures for a Bayesian network describing genetic inheritance of handedness.

Exercise 7 (handedness-exercise) Let $H_x$ be a random variable denoting the handedness of an individual $x$, with possible values $l$ or $r$.
A common hypothesis is that left- or right-handedness is inherited by a simple mechanism; that is, perhaps there is a gene $G_x$, also with values $l$ or $r$, and perhaps actual handedness turns out mostly the same (with some probability $s$) as the gene an individual possesses. Furthermore, perhaps the gene itself is equally likely to be inherited from either of an individual's parents, with a small nonzero probability $m$ of a random mutation flipping the handedness.

1. Which of the three networks in Figure handedness-figure claim that $\textbf{P}(G_{father},G_{mother},G_{child}) = \textbf{P}(G_{father})\textbf{P}(G_{mother})\textbf{P}(G_{child})$?
2. Which of the three networks make independence claims that are consistent with the hypothesis about the inheritance of handedness?
3. Which of the three networks is the best description of the hypothesis?
4. Write down the CPT for the $G_{child}$ node in network (a), in terms of $s$ and $m$.
5. Suppose that $P(G_{father} = l) = P(G_{mother} = l) = q$. In network (a), derive an expression for $P(G_{child} = l)$ in terms of $m$ and $q$ only, by conditioning on its parent nodes.
6. Under conditions of genetic equilibrium, we expect the distribution of genes to be the same across generations. Use this to calculate the value of $q$, and, given what you know about handedness in humans, explain why the hypothesis described at the beginning of this question must be wrong.

Exercise 8 (markov-blanket-exercise) The Markov blanket of a variable is defined on page markov-blanket-page. Prove that a variable is independent of all other variables in the network, given its Markov blanket, and derive Equation (markov-blanket-equation) (page markov-blanket-equation).

A Bayesian network describing some features of a car's electrical system and engine. Each variable is Boolean, and the true value indicates that the corresponding aspect of the vehicle is in working order.

Exercise 9 Consider the network for car diagnosis shown in Figure car-starts-figure.

1. Extend the network with the Boolean variables ${IcyWeather}$ and ${StarterMotor}$.
2. Give reasonable conditional probability tables for all the nodes.
3. How many independent values are contained in the joint probability distribution for eight Boolean nodes, assuming that no conditional independence relations are known to hold among them?
4. How many independent probability values do your network tables contain?
5. The conditional distribution for ${Starts}$ could be described as a noisy-AND distribution. Define this family in general and relate it to the noisy-OR distribution.

Exercise 10 Consider a simple Bayesian network with root variables ${Cold}$, ${Flu}$, and ${Malaria}$ and child variable ${Fever}$, with a noisy-OR conditional distribution for ${Fever}$ as described in Section canonical-distribution-section. By adding appropriate auxiliary variables for inhibition events and fever-inducing events, construct an equivalent Bayesian network whose CPTs (except for root variables) are deterministic. Define the CPTs and prove equivalence.

Exercise 11 (LG-exercise) Consider the family of linear Gaussian networks, as defined on page LG-network-page.

1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let $\textbf{P}(X_2 \mid X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
2. Prove by induction that the joint distribution for a general linear Gaussian network on $X_1,\ldots,X_n$ is also a multivariate Gaussian.

Exercise 12 (multivalued-probit-exercise) The probit distribution defined on page probit-page describes the probability distribution for a Boolean child, given a single continuous parent.

1. How might the definition be extended to cover multiple continuous parents?
2. How might it be extended to handle a multivalued child variable?
Consider both cases where the child's values are ordered (as in selecting a gear while driving, depending on speed, slope, desired acceleration, etc.) and cases where they are unordered (as in selecting bus, train, or car to get to work). (Hint: Consider ways to divide the possible values into two sets, to mimic a Boolean variable.)

Exercise 13 In your local nuclear power station, there is an alarm that senses when a temperature gauge exceeds a given threshold. The gauge measures the temperature of the core. Consider the Boolean variables $A$ (alarm sounds), $F_A$ (alarm is faulty), and $F_G$ (gauge is faulty) and the multivalued nodes $G$ (gauge reading) and $T$ (actual core temperature).

1. Draw a Bayesian network for this domain, given that the gauge is more likely to fail when the core temperature gets too high.
2. Is your network a polytree? Why or why not?
3. Suppose there are just two possible actual and measured temperatures, normal and high; the probability that the gauge gives the correct temperature is $x$ when it is working, but $y$ when it is faulty. Give the conditional probability table associated with $G$.
4. Suppose the alarm works correctly unless it is faulty, in which case it never sounds. Give the conditional probability table associated with $A$.
5. Suppose the alarm and gauge are working and the alarm sounds. Calculate an expression for the probability that the temperature of the core is too high, in terms of the various conditional probabilities in the network.

Exercise 14 (telescope-exercise) Two astronomers in different parts of the world make measurements $M_1$ and $M_2$ of the number of stars $N$ in some small region of the sky, using their telescopes. Normally, there is a small possibility $e$ of error by up to one star in each direction.
Each telescope can also (with a much smaller probability $f$) be badly out of focus (events $F_1$ and $F_2$), in which case the scientist will undercount by three or more stars (or if $N$ is less than 3, fail to detect any stars at all). Consider the three networks shown in Figure telescope-nets-figure.

1. Which of these Bayesian networks are correct (but not necessarily efficient) representations of the preceding information?
2. Which is the best network? Explain.
3. Write out a conditional distribution for $\textbf{P}(M_1 \mid N)$, for the case where $N \in \{1,2,3\}$ and $M_1 \in \{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
4. Suppose $M_1 = 1$ and $M_2 = 3$. What are the possible numbers of stars if you assume no prior constraint on the values of $N$?
5. What is the most likely number of stars, given these observations? Explain how to compute this, or if it is not possible to compute, explain what additional information is needed and how it would affect the result.

Exercise 15 Consider the network shown in Figure telescope-nets-figure(ii), and assume that the two telescopes work identically. $N \in \{1,2,3\}$ and $M_1,M_2 \in \{0,1,2,3,4\}$, with the symbolic CPTs as described in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution $\textbf{P}(N \mid M_1 = 2, M_2 = 2)$.

Three possible networks for the telescope problem.

Exercise 16 Consider the Bayes net shown in Figure politics-figure.

1. Which of the following are asserted by the network structure?
   1. $\textbf{P}(B,I,M) = \textbf{P}(B)\textbf{P}(I)\textbf{P}(M)$.
   2. $\textbf{P}(J \mid G) = \textbf{P}(J \mid G,I)$.
   3. $\textbf{P}(M \mid G,B,I) = \textbf{P}(M \mid G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
4. A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?
5. Suppose we want to add the variable $P = {PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

A simple Bayes net with Boolean variables B = {BrokeElectionLaw}, I = {Indicted}, M = {PoliticallyMotivatedProsecutor}, G = {FoundGuilty}, J = {Jailed}.

Exercise 17 Consider the Bayes net shown in Figure politics-figure.

1. Which of the following are asserted by the network structure?
   1. $\textbf{P}(B,I,M) = \textbf{P}(B)\textbf{P}(I)\textbf{P}(M)$.
   2. $\textbf{P}(J \mid G) = \textbf{P}(J \mid G,I)$.
   3. $\textbf{P}(M \mid G,B,I) = \textbf{P}(M \mid G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
4. A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?
5. Suppose we want to add the variable $P = {PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

Exercise 18 (VE-exercise) Consider the variable elimination algorithm in Figure elimination-ask-algorithm (page elimination-ask-algorithm).

1. Section exact-inference-section applies variable elimination to the query $$\textbf{P}({Burglary} \mid {JohnCalls} = {true}, {MaryCalls} = {true}) .$$ Perform the calculations indicated and check that the answer is correct.
2. Count the number of arithmetic operations performed, and compare it with the number performed by the enumeration algorithm.
3. Suppose a network has the form of a chain: a sequence of Boolean variables $X_1,\ldots,X_n$ where ${Parents}(X_i) = \{X_{i-1}\}$ for $i = 2,\ldots,n$. What is the complexity of computing $\textbf{P}(X_1 \mid X_n = {true})$ using enumeration? Using variable elimination?
4. Prove that the complexity of running variable elimination on a polytree network is linear in the size of the tree for any variable ordering consistent with the network structure.

Exercise 19 (bn-complexity-exercise) Investigate the complexity of exact inference in general Bayesian networks:

1. Prove that any 3-SAT problem can be reduced to exact inference in a Bayesian network constructed to represent the particular problem and hence that exact inference is NP-hard. (Hint: Consider a network with one variable for each proposition symbol, one for each clause, and one for the conjunction of clauses.)
2. The problem of counting the number of satisfying assignments for a 3-SAT problem is #P-complete. Show that exact inference is at least as hard as this.

Exercise 20 (primitive-sampling-exercise) Consider the problem of generating a random sample from a specified distribution on a single variable. Assume you have a random number generator that returns a random number uniformly distributed between 0 and 1.

1. Let $X$ be a discrete variable with $P(X = x_i) = p_i$ for $i \in \{1,\ldots,k\}$. The cumulative distribution of $X$ gives the probability that $X \in \{x_1,\ldots,x_j\}$ for each possible $j$. (See also Appendix [math-appendix].) Explain how to calculate the cumulative distribution in $O(k)$ time and how to generate a single sample of $X$ from it. Can the latter be done in less than $O(k)$ time?
2. Now suppose we want to generate $N$ samples of $X$, where $N \gg k$. Explain how to do this with an expected run time per sample that is constant (i.e., independent of $k$).
3. Now consider a continuous-valued variable with a parameterized distribution (e.g., Gaussian). How can samples be generated from such a distribution?
4. Suppose you want to query a continuous-valued variable and you are using a sampling algorithm such as LIKELIHOOD-WEIGHTING to do the inference. How would you have to modify the query-answering process?

Exercise 21 Consider the query $\textbf{P}({Rain} \mid {Sprinkler} = {true}, {WetGrass} = {true})$ in Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.

1. How many states does the Markov chain have?
2. Calculate the transition matrix $\textbf{Q}$ containing $q(\textbf{y} \rightarrow \textbf{y}')$ for all $\textbf{y}$, $\textbf{y}'$.
3. What does $\textbf{Q}^2$, the square of the transition matrix, represent?
4. What about $\textbf{Q}^n$ as $n \to \infty$?
5. Explain how to do probabilistic inference in Bayesian networks, assuming that $\textbf{Q}^n$ is available. Is this a practical way to do inference?

Exercise 22 (gibbs-proof-exercise) This exercise explores the stationary distribution for Gibbs sampling methods.

1. The convex composition $[\alpha, q_1;\ 1-\alpha, q_2]$ of $q_1$ and $q_2$ is a transition probability distribution that first chooses one of $q_1$ and $q_2$ with probabilities $\alpha$ and $1-\alpha$, respectively, and then applies whichever is chosen. Prove that if $q_1$ and $q_2$ are in detailed balance with $\pi$, then their convex composition is also in detailed balance with $\pi$. (Note: this result justifies a variant of GIBBS-ASK in which variables are chosen at random rather than sampled in a fixed sequence.)
2. Prove that if each of $q_1$ and $q_2$ has $\pi$ as its stationary distribution, then the sequential composition $q = q_1 \circ q_2$ also has $\pi$ as its stationary distribution.

Exercise 23 (MH-exercise) The Metropolis–Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from $\pi(\textbf{x}) = P(\textbf{x} \mid \textbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'} \mid \textbf{x})$, given the current state $\textbf{x}$. Then, it probabilistically accepts or rejects $\textbf{x'}$ according to the acceptance probability $$\alpha(\textbf{x'} \mid \textbf{x}) = \min \left(1, \frac{\pi(\textbf{x'})\, q(\textbf{x} \mid \textbf{x'})}{\pi(\textbf{x})\, q(\textbf{x'} \mid \textbf{x})} \right) .$$ If the proposal is rejected, the state remains at $\textbf{x}$.

1. Consider an ordinary Gibbs sampling step for a specific variable $X_i$. Show that this step, considered as a proposal, is guaranteed to be accepted by Metropolis–Hastings. (Hence, Gibbs sampling is a special case of Metropolis–Hastings.)
2. Show that the two-step process above, viewed as a transition probability distribution, is in detailed balance with $\pi$.

Exercise 24 (soccer-rpm-exercise) Three soccer teams $A$, $B$, and $C$, play each other once. Each match is between two teams, and can be won, drawn, or lost. Each team has a fixed, unknown degree of quality (an integer ranging from 0 to 3), and the outcome of a match depends probabilistically on the difference in quality between the two teams.

1. Construct a relational probability model to describe this domain, and suggest numerical values for all the necessary probability distributions.
2. Construct the equivalent Bayesian network for the three matches.
3. Suppose that in the first two matches $A$ beats $B$ and draws with $C$.
Using an exact inference algorithm of your choice, compute the posterior distribution for the outcome of the third match.
4. Suppose there are $n$ teams in the league and we have the results for all but the last match. How does the complexity of predicting the last game vary with $n$?
5. Investigate the application of MCMC to this problem. How quickly does it converge in practice and how well does it scale?

Exercise 1 (state-augmentation-exercise) Show that any second-order Markov process can be rewritten as a first-order Markov process with an augmented set of state variables. Can this always be done parsimoniously, i.e., without increasing the number of parameters needed to specify the transition model?

Exercise 2 (markov-convergence-exercise) In this exercise, we examine what happens to the probabilities in the umbrella world in the limit of long time sequences.

1. Suppose we observe an unending sequence of days on which the umbrella appears. Show that, as the days go by, the probability of rain on the current day increases monotonically toward a fixed point. Calculate this fixed point.
2. Now consider forecasting further and further into the future, given just the first two umbrella observations. First, compute the probability $P(r_{2+k} \mid u_1,u_2)$ for $k = 1 \ldots 20$ and plot the results. You should see that the probability converges towards a fixed point. Prove that the exact value of this fixed point is 0.5.

Exercise 3 (island-exercise) This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). We wish to compute $\textbf{P}(\textbf{X}_k \mid \textbf{e}_{1:t})$ for $k = 1,\ldots,t$. This will be done with a divide-and-conquer approach.

1. Suppose, for simplicity, that $t$ is odd, and let the halfway point be $h = (t+1)/2$.
Show that $\textbf{P}(\textbf{X}_k|\textbf{e}_{1:t})$ can be computed for $k=1,\ldots,h$ given just the initial forward message $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
3. Given the results of (a) and (b), a recursive divide-and-conquer algorithm can be constructed by first running forward along the sequence and then backward from the end, storing just the required messages at the middle and the ends. Then the algorithm is called on each half. Write out the algorithm in detail.
4. Compute the time and space complexity of the algorithm as a function of $t$, the length of the sequence. How does this change if we divide the input into more than two pieces?

Exercise 4 (flawed-viterbi-exercise) On page flawed-viterbi-page, we outlined a flawed procedure for finding the most likely state sequence, given an observation sequence. The procedure involves finding the most likely state at each time step, using smoothing, and returning the sequence composed of these states. Show that, for some temporal probability models and observation sequences, this procedure returns an impossible state sequence (i.e., the posterior probability of the sequence is zero).

Exercise 5 (hmm-likelihood-exercise) Equation (matrix-filtering-equation) describes the filtering process for the matrix formulation of HMMs. Give a similar equation for the calculation of likelihoods, which was described generically in Equation (forward-likelihood-equation).

Exercise 6 Consider the vacuum worlds of Figure vacuum-maze-ch4-figure (perfect sensing) and Figure vacuum-maze-hmm2-figure (noisy sensing). Suppose that the robot receives an observation sequence such that, with perfect sensing, there is exactly one possible location it could be in. Is this location necessarily the most probable location under noisy sensing for sufficiently small noise probability $\epsilon$?
Prove your claim or find a counterexample.

Exercise 7 (hmm-robust-exercise) In Section hmm-localization-section, the prior distribution over locations is uniform and the transition model assumes an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action actually tends to move southeast [hmm-robot-southeast-page]. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$.

Exercise 8 (roomba-viterbi-exercise) Consider a version of the vacuum robot (page vacuum-maze-hmm2-figure) that has the policy of going straight for as long as it can; only when it encounters an obstacle does it change to a new (randomly selected) heading. To model this robot, each state in the model consists of a (location, heading) pair. Implement this model and see how well the Viterbi algorithm can track a robot with this model. The robot’s policy is more constrained than the random-walk robot’s; does that mean that predictions of the most likely path are more accurate?

Exercise 9 We have described three policies for the vacuum robot: (1) a uniform random walk, (2) a bias for wandering southeast, as described in Exercise hmm-robust-exercise, and (3) the policy described in Exercise roomba-viterbi-exercise. Suppose an observer is given the observation sequence from a vacuum robot, but is not sure which of the three policies the robot is following. What approach should the observer use to find the most likely path, given the observations? Implement the approach and test it. How much does the localization accuracy suffer, compared to the case in which the observer knows which policy the robot is following?

Exercise 10 This exercise is concerned with filtering in an environment with no landmarks. Consider a vacuum robot in an empty room, represented by an $n \times m$ rectangular grid.
The robot’s location is hidden; the only evidence available to the observer is a noisy location sensor that gives an approximation to the robot’s location. If the robot is at location $(x, y)$, then with probability .1 the sensor gives the correct location, with probability .05 each it reports one of the 8 locations immediately surrounding $(x, y)$, with probability .025 each it reports one of the 16 locations that surround those 8, and with the remaining probability of .1 it reports “no reading.” The robot’s policy is to pick a direction and follow it with probability .8 on each step; the robot switches to a randomly selected new heading with probability .2 (or with probability 1 if it encounters a wall). Implement this as an HMM and do filtering to track the robot. How accurately can we track the robot’s path?

Exercise 11 This exercise is concerned with filtering in an environment with no landmarks. Consider a vacuum robot in an empty room, represented by an $n \times m$ rectangular grid. The robot’s location is hidden; the only evidence available to the observer is a noisy location sensor that gives an approximation to the robot’s location. If the robot is at location $(x, y)$, then with probability .1 the sensor gives the correct location, with probability .05 each it reports one of the 8 locations immediately surrounding $(x, y)$, with probability .025 each it reports one of the 16 locations that surround those 8, and with the remaining probability of .1 it reports “no reading.” The robot’s policy is to pick a direction and follow it with probability .7 on each step; the robot switches to a randomly selected new heading with probability .3 (or with probability 1 if it encounters a wall). Implement this as an HMM and do filtering to track the robot. How accurately can we track the robot’s path?

A Bayesian network representation of a switching Kalman filter.
The switching variable $S_t$ is a discrete state variable whose value determines the transition model for the continuous state variables $\textbf{X}_t$. For any discrete state $i$, the transition model $\textbf{P}(\textbf{X}_{t+1}|\textbf{X}_t,S_t=i)$ is a linear Gaussian model, just as in a regular Kalman filter. The transition model for the discrete state, $\textbf{P}(S_{t+1}|S_t)$, can be thought of as a matrix, as in a hidden Markov model.

Exercise 12 (switching-kf-exercise) Often, we wish to monitor a continuous-state system whose behavior switches unpredictably among a set of $k$ distinct “modes.” For example, an aircraft trying to evade a missile can execute a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in Figure switching-kf-figure.

1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate $\textbf{P}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction $\textbf{P}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such that the weights sum to 1.
2. Show that if the current continuous state estimate $\textbf{P}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate $\textbf{P}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of $km$ Gaussians.
3.
What aspect of the temporal process do the weights in the Gaussian mixture represent?

The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among the simplest hybrid dynamic models.

Exercise 13 (kalman-update-exercise) Complete the missing step in the derivation of Equation (kalman-one-step-equation) on page kalman-one-step-equation, the first update step for the one-dimensional Kalman filter.

Exercise 14 (kalman-variance-exercise) Let us examine the behavior of the variance update in Equation (kalman-univariate-equation) (page kalman-univariate-equation).

1. Plot the value of $\sigma_t^2$ as a function of $t$, given various values for $\sigma_x^2$ and $\sigma_z^2$.
2. Show that the update has a fixed point $\sigma^2$ such that $\sigma_t^2 \rightarrow \sigma^2$ as $t \rightarrow \infty$, and calculate the value of $\sigma^2$.
3. Give a qualitative explanation for what happens as $\sigma_x^2 \rightarrow 0$ and as $\sigma_z^2 \rightarrow 0$.

Exercise 15 (sleep1-exercise) A professor wants to know if students are getting enough sleep. Each day, the professor observes whether the students sleep in class, and whether they have red eyes. The professor has the following domain theory:

- The prior probability of getting enough sleep, with no observations, is 0.7.
- The probability of getting enough sleep on night $t$ is 0.8 given that the student got enough sleep the previous night, and 0.3 if not.
- The probability of having red eyes is 0.2 if the student got enough sleep, and 0.7 if not.
- The probability of sleeping in class is 0.1 if the student got enough sleep, and 0.3 if not.

Formulate this information as a dynamic Bayesian network that the professor could use to filter or predict from a sequence of observations. Then reformulate it as a hidden Markov model that has only a single observation variable. Give the complete probability tables for the model.

Exercise 16 A professor wants to know if students are getting enough sleep.
Each day, the professor observes whether the students sleep in class, and whether they have red eyes. The professor has the following domain theory:

- The prior probability of getting enough sleep, with no observations, is 0.7.
- The probability of getting enough sleep on night $t$ is 0.8 given that the student got enough sleep the previous night, and 0.3 if not.
- The probability of having red eyes is 0.2 if the student got enough sleep, and 0.7 if not.
- The probability of sleeping in class is 0.1 if the student got enough sleep, and 0.3 if not.

Formulate this information as a dynamic Bayesian network that the professor could use to filter or predict from a sequence of observations. Then reformulate it as a hidden Markov model that has only a single observation variable. Give the complete probability tables for the model.

Exercise 17 For the DBN specified in Exercise sleep1-exercise and for the evidence values

$$\textbf{e}_1 = \textit{not red eyes, not sleeping in class}$$
$$\textbf{e}_2 = \textit{red eyes, not sleeping in class}$$
$$\textbf{e}_3 = \textit{red eyes, sleeping in class}$$

perform the following computations:

1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.

Exercise 18 Suppose that a particular student shows up with red eyes and sleeps in class every day. Given the model described in Exercise sleep1-exercise, explain why the probability that the student had enough sleep the previous night converges to a fixed point rather than continuing to go down as we gather more days of evidence. What is the fixed point?
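The numerical parts of the two exercises above come down to a few lines of forward filtering. The sketch below is one possible implementation, not the book's code: the state is a single Boolean (True means enough sleep), and the two evidence variables are treated as conditionally independent given the state, as the DBN specifies.

```python
# Forward filtering for the sleep model of Exercise sleep1-exercise.
# State True = "enough sleep".  The two evidence variables (red eyes,
# sleeping in class) are conditionally independent given the state.

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

prior   = {True: 0.7, False: 0.3}
trans   = {True: 0.8, False: 0.3}   # P(enough sleep tonight | state last night)
p_red   = {True: 0.2, False: 0.7}   # P(red eyes | state)
p_class = {True: 0.1, False: 0.3}   # P(sleeps in class | state)

def filter_steps(evidence, f=None):
    f = dict(prior if f is None else f)
    results = []
    for red, sleeps in evidence:
        # prediction step: push the belief through the transition model
        pred = {s: sum(f[x] * (trans[x] if s else 1 - trans[x]) for x in f)
                for s in (True, False)}
        # update step: weight by the likelihood of both observations
        lik = {s: (p_red[s] if red else 1 - p_red[s]) *
                  (p_class[s] if sleeps else 1 - p_class[s])
               for s in (True, False)}
        f = normalize({s: pred[s] * lik[s] for s in (True, False)})
        results.append(round(f[True], 4))
    return results

# Evidence e1, e2, e3 from Exercise 17, as (red eyes?, sleeps in class?) pairs.
print(filter_steps([(False, False), (True, False), (True, True)]))
```

Feeding the same loop a long run of (True, True) evidence shows the filtered probability settling at the fixed point asked about in Exercise 18.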
Answer this both numerically (by computation) and analytically.

Exercise 19 (battery-sequence-exercise) This exercise analyzes in more detail the persistent-failure model for the battery sensor in Figure battery-persistence-figure(a) (page battery-persistence-figure).

1. Figure battery-persistence-figure(b) stops at $t=32$. Describe qualitatively what should happen as $t \to \infty$ if the sensor continues to read 0.
2. Suppose that the external temperature affects the battery sensor in such a way that transient failures become more likely as temperature increases. Show how to augment the DBN structure in Figure battery-persistence-figure(a), and explain any required changes to the CPTs.
3. Given the new network structure, can battery readings be used by the robot to infer the current temperature?

Exercise 20 (dbn-elimination-exercise) Consider applying the variable elimination algorithm to the umbrella DBN unrolled for three slices, where the query is $\textbf{P}(R_3|u_1,u_2,u_3)$. Show that the space complexity of the algorithm—the size of the largest factor—is the same, regardless of whether the rain variables are eliminated in forward or backward order.

Exercise 1 (almanac-game) (Adapted from David Heckerman.) This exercise concerns the Almanac Game, which is used by decision analysts to calibrate numeric estimation. For each of the questions that follow, give your best guess of the answer, that is, a number that you think is as likely to be too high as it is to be too low. Also give your guess at a 25th percentile estimate, that is, a number that you think has a 25% chance of being too high, and a 75% chance of being too low. Do the same for the 75th percentile. (Thus, you should give three estimates in all—low, median, and high—for each question.)

1. Number of passengers who flew between New York and Los Angeles in 1989.
2. Population of Warsaw in 1992.
3. Year in which Coronado discovered the Mississippi River.
4.
Number of votes received by Jimmy Carter in the 1976 presidential election.
5. Age of the oldest living tree, as of 2002.
6. Height of the Hoover Dam in feet.
7. Number of eggs produced in Oregon in 1985.
8. Number of Buddhists in the world in 1992.
9. Number of deaths due to AIDS in the United States in 1981.
10. Number of U.S. patents granted in 1901.

The correct answers appear after the last exercise of this chapter. From the point of view of decision analysis, the interesting thing is not how close your median guesses came to the real answers, but rather how often the real answer came within your 25% and 75% bounds. If it was about half the time, then your bounds are accurate. But if you’re like most people, you will be more sure of yourself than you should be, and fewer than half the answers will fall within the bounds. With practice, you can calibrate yourself to give realistic bounds, and thus be more useful in supplying information for decision making. Try this second set of questions and see if there is any improvement:

1. Year of birth of Zsa Zsa Gabor.
2. Maximum distance from Mars to the sun in miles.
3. Value in dollars of exports of wheat from the United States in 1992.
4. Tons handled by the port of Honolulu in 1991.
5. Annual salary in dollars of the governor of California in 1993.
6. Population of San Diego in 1990.
7. Year in which Roger Williams founded Providence, Rhode Island.
8. Height of Mt. Kilimanjaro in feet.
9. Length of the Brooklyn Bridge in feet.
10. Number of deaths due to automobile accidents in the United States in 1992.

Exercise 2 Chris considers four used cars before buying the one with maximum expected utility. Pat considers ten cars and does the same. All other things being equal, which one is more likely to have the better car? Which is more likely to be disappointed with their car’s quality? By how much (in terms of standard deviations of expected quality)?

Exercise 3 Chris considers five used cars before buying the one with maximum expected utility.
Pat considers eleven cars and does the same. All other things being equal, which one is more likely to have the better car? Which is more likely to be disappointed with their car’s quality? By how much (in terms of standard deviations of expected quality)?

Exercise 4 (St-Petersburg-exercise) In 1713, Nicolas Bernoulli stated a puzzle, now called the St. Petersburg paradox, which works as follows. You have the opportunity to play a game in which a fair coin is tossed repeatedly until it comes up heads. If the first heads appears on the $n$th toss, you win $2^n$ dollars.

1. Show that the expected monetary value of this game is infinite.
2. How much would you, personally, pay to play the game?
3. Nicolas’s cousin Daniel Bernoulli resolved the apparent paradox in 1738 by suggesting that the utility of money is measured on a logarithmic scale (i.e., $U(S_n) = a \log_2 n + b$, where $S_n$ is the state of having $\$n$). What is the expected utility of the game under this assumption?
4. What is the maximum amount that it would be rational to pay to play the game, assuming that one’s initial wealth is $\$k$?

Exercise 5 Write a computer program to automate the process in Exercise assessment-exercise. Try your program out on several people of different net worth and political outlook. Comment on the consistency of your results, both for an individual and across individuals.

Exercise 6 (surprise-candy-exercise) The Surprise Candy Company makes candy in two flavors: 75% are strawberry flavor and 25% are anchovy flavor. Each new piece of candy starts out with a round shape; as it moves along the production line, a machine randomly selects a certain percentage to be trimmed into a square; then, each piece is wrapped in a wrapper whose color is chosen randomly to be red or brown. 70% of the strawberry candies are round and 70% have a red wrapper, while 90% of the anchovy candies are square and 90% have a brown wrapper.
All candies are sold individually in sealed, identical, black boxes.

Now you, the customer, have just bought a Surprise candy at the store but have not yet opened the box. Consider the three Bayes nets in Figure 3candy-figure.

1. Which network(s) can correctly represent $\textbf{P}(Flavor,Wrapper,Shape)$?
2. Which network is the best representation for this problem?
3. Does network (i) assert that $\textbf{P}(Wrapper|Shape) = \textbf{P}(Wrapper)$?
4. What is the probability that your candy has a red wrapper?
5. In the box is a round candy with a red wrapper. What is the probability that its flavor is strawberry?
6. An unwrapped strawberry candy is worth $s$ on the open market and an unwrapped anchovy candy is worth $a$. Write an expression for the value of an unopened candy box.
7. A new law prohibits trading of unwrapped candies, but it is still legal to trade wrapped candies (out of the box). Is an unopened candy box now worth more than, less than, or the same as before?

Three proposed Bayes nets for the Surprise Candy problem

Exercise 7 (surprise-candy-exercise) The Surprise Candy Company makes candy in two flavors: 70% are strawberry flavor and 30% are anchovy flavor. Each new piece of candy starts out with a round shape; as it moves along the production line, a machine randomly selects a certain percentage to be trimmed into a square; then, each piece is wrapped in a wrapper whose color is chosen randomly to be red or brown. 80% of the strawberry candies are round and 80% have a red wrapper, while 90% of the anchovy candies are square and 90% have a brown wrapper. All candies are sold individually in sealed, identical, black boxes.

Now you, the customer, have just bought a Surprise candy at the store but have not yet opened the box. Consider the three Bayes nets in Figure 3candy-figure.

1. Which network(s) can correctly represent $\textbf{P}(Flavor,Wrapper,Shape)$?
2. Which network is the best representation for this problem?
3.
Does network (i) assert that $\textbf{P}(Wrapper|Shape) = \textbf{P}(Wrapper)$?
4. What is the probability that your candy has a red wrapper?
5. In the box is a round candy with a red wrapper. What is the probability that its flavor is strawberry?
6. An unwrapped strawberry candy is worth $s$ on the open market and an unwrapped anchovy candy is worth $a$. Write an expression for the value of an unopened candy box.
7. A new law prohibits trading of unwrapped candies, but it is still legal to trade wrapped candies (out of the box). Is an unopened candy box now worth more than, less than, or the same as before?

Exercise 8 Prove that the judgments $B \succ A$ and $C \succ D$ in the Allais paradox (page allais-page) violate the axiom of substitutability.

Exercise 9 Consider the Allais paradox described on page allais-page: an agent who prefers $B$ over $A$ (taking the sure thing), and $C$ over $D$ (taking the higher EMV) is not acting rationally, according to utility theory. Do you think this indicates a problem for the agent, a problem for the theory, or no problem at all? Explain.

Exercise 10 Tickets to a lottery cost $\$1$. There are two possible prizes: a $\$10$ payoff with probability 1/50, and a $\$1,000,000$ payoff with probability 1/2,000,000. What is the expected monetary value of a lottery ticket? When (if ever) is it rational to buy a ticket? Be precise—show an equation involving utilities. You may assume current wealth of $\$k$ and that $U(S_k)=0$. You may also assume that $U(S_{k+10}) = 10 \times U(S_{k+1})$, but you may not make any assumptions about $U(S_{k+1,000,000})$. Sociological studies show that people with lower income buy a disproportionate number of lottery tickets. Do you think this is because they are worse decision makers or because they have a different utility function?
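The expected-monetary-value part of the lottery question is a one-line calculation, using exactly the numbers stated above:

```python
# EMV of one $1 lottery ticket: $10 with prob. 1/50, $1,000,000 with
# prob. 1/2,000,000, minus the ticket price.
emv = (1 / 50) * 10 + (1 / 2_000_000) * 1_000_000 - 1
print(emv)   # about -0.3: a ticket loses 30 cents in expectation
```

So a risk-neutral agent should not buy; the rest of the exercise asks when a nonlinear utility over wealth can still make buying rational.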
Consider the value of contemplating the possibility of winning the lottery versus the value of contemplating becoming an action hero while watching an adventure movie.

Exercise 11 (assessment-exercise) Assess your own utility for different incremental amounts of money by running a series of preference tests between some definite amount $M_1$ and a lottery $[p, M_2; (1-p), 0]$. Choose different values of $M_1$ and $M_2$, and vary $p$ until you are indifferent between the two choices. Plot the resulting utility function.

Exercise 12 How much is a micromort worth to you? Devise a protocol to determine this. Ask questions based both on paying to avoid risk and being paid to accept risk.

Exercise 13 (kmax-exercise) Let continuous variables $X_1,\ldots,X_k$ be independently distributed according to the same probability density function $f(x)$. Prove that the density function for $\max\{X_1,\ldots,X_k\}$ is given by $k f(x) (F(x))^{k-1}$, where $F$ is the cumulative distribution for $f$.

Exercise 14 Economists often make use of an exponential utility function for money: $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.

1. Assume Mary has an exponential utility function with $R = \$500$. Mary is given the choice between receiving $\$500$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $\$5000$ and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $\$500$ and a 50% probability of winning nothing.
Approximate the value of $R$ (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)

Exercise 15 Economists often make use of an exponential utility function for money: $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.

1. Assume Mary has an exponential utility function with $R = \$400$. Mary is given the choice between receiving $\$400$ with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $\$5000$ and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
2. Consider the choice between receiving $\$100$ with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $\$500$ and a 50% probability of winning nothing. Approximate the value of $R$ (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)

Exercise 16 Alex is given the choice between two games. In Game 1, a fair coin is flipped and if it comes up heads, Alex receives $\$100$. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped twice. Each time the coin comes up heads, Alex receives $\$50$, and Alex receives nothing for each coin flip that comes up tails.
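As a numerical illustration of the comparison between the two games (a sketch only; the square-root utility is an arbitrary concave, increasing choice made for illustration, not part of the exercise):

```python
# Expected utilities of the two coin games under a utility function u.
from math import sqrt

def eu_game1(u):          # one flip: $100 on heads, $0 on tails
    return 0.5 * u(100) + 0.5 * u(0)

def eu_game2(u):          # two flips: $50 per head
    return 0.25 * u(100) + 0.5 * u(50) + 0.25 * u(0)

# Under a risk-neutral (linear) utility the games have equal value,
# but a concave utility prefers Game 2's smaller spread of outcomes.
print(eu_game1(lambda x: x), eu_game2(lambda x: x))
print(eu_game1(sqrt), eu_game2(sqrt))
```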
Assuming that Alex has a monotonically increasing utility function for money in the range [$\$0$, $\$100$], show mathematically that if Alex prefers Game 2 to Game 1, then Alex is risk averse (at least with respect to this range of monetary amounts).

Show that if $X_1$ and $X_2$ are preferentially independent of $X_3$, and $X_2$ and $X_3$ are preferentially independent of $X_1$, then $X_3$ and $X_1$ are preferentially independent of $X_2$.

Exercise 17 (airport-au-id-exercise) Repeat Exercise airport-id-exercise, using the action-utility representation shown in Figure airport-au-id-figure.

Exercise 18 For either of the airport-siting diagrams from Exercises airport-id-exercise and airport-au-id-exercise, to which conditional probability table entry is the utility most sensitive, given the available evidence?

Exercise 19 Modify and extend the Bayesian network code in the code repository to provide for creation and evaluation of decision networks and the calculation of information value.

Exercise 20 Consider a student who has the choice to buy or not buy a textbook for a course. We’ll model this as a decision problem with one Boolean decision node, $B$, indicating whether the agent chooses to buy the book, and two Boolean chance nodes, $M$, indicating whether the student has mastered the material in the book, and $P$, indicating whether the student passes the course. Of course, there is also a utility node, $U$. A certain student, Sam, has an additive utility function: 0 for not buying the book and $-\$100$ for buying it; and $\$2000$ for passing the course and 0 for not passing. Sam’s conditional probability estimates are as follows:

$$\begin{array}{ll}
P(p|b,m) = 0.9 & P(m|b) = 0.9 \\
P(p|b, \lnot m) = 0.5 & P(m|\lnot b) = 0.7 \\
P(p|\lnot b, m) = 0.8 & \\
P(p|\lnot b, \lnot m) = 0.3 &
\end{array}$$

You might think that $P$ would be independent of $B$ given $M$, but this course has an open-book final—so having the book helps.

1. Draw the decision network for this problem.
2.
Compute the expected utility of buying the book and of not buying it.
3. What should Sam do?

Exercise 21 (airport-id-exercise) This exercise completes the analysis of the airport-siting problem in Figure airport-id-figure.

1. Provide reasonable variable domains, probabilities, and utilities for the network, assuming that there are three possible sites.
2. Solve the decision problem.
3. What happens if changes in technology mean that each aircraft generates half the noise?
4. What if noise avoidance becomes three times more important?
5. Calculate the VPI for ${AirTraffic}$, ${Litigation}$, and ${Construction}$ in your model.

Exercise 22 (car-vpi-exercise) (Adapted from Pearl [Pearl:1988].) A used-car buyer can decide to carry out various tests with various costs (e.g., kick the tires, take the car to a qualified mechanic) and then, depending on the outcome of the tests, decide which car to buy. We will assume that the buyer is deciding whether to buy car $c_1$, that there is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs $\$50$.

A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ costs $\$1,500$, and its market value is $\$2,000$ if it is in good shape; if not, $\$700$ in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.

1. Draw the decision network that represents this problem.
2. Calculate the expected net gain from buying $c_1$, given no test.
3. Tests can be described by the probability that the car will pass or fail the test given that the car is in good or bad shape. We have the following information: $$P({pass}(c_1,t_1) | q^+(c_1)) = 0.8$$ $$P({pass}(c_1,t_1) | q^-(c_1)) = 0.35$$ Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
4.
Calculate the optimal decisions given either a pass or a fail, and their expected utilities.
5. Calculate the value of information of the test, and derive an optimal conditional plan for the buyer.

Exercise 23 (nonnegative-VPI-exercise) Recall the definition of value of information in Section VPI-section.

1. Prove that the value of information is nonnegative and order independent.
2. Explain why it is that some people would prefer not to get some information—for example, not wanting to know the sex of their baby when an ultrasound is done.
3. A function $f$ on sets is submodular if, for any element $x$ and any sets $A$ and $B$ such that $A \subseteq B$, adding $x$ to $A$ gives a greater increase in $f$ than adding $x$ to $B$: $$A \subseteq B \Rightarrow (f(A \cup \{x\}) - f(A)) \geq (f(B \cup \{x\}) - f(B)).$$ Submodularity captures the intuitive notion of diminishing returns. Is the value of information, viewed as a function $f$ on sets of possible observations, submodular? Prove this or find a counterexample.

Exercise 1 (mdp-model-exercise) For the $4 \times 3$ world shown in Figure sequential-decision-world-figure, calculate which squares can be reached from (1,1) by the action sequence $[{Up},{Up},{Right},{Right},{Right}]$ and with what probabilities. Explain how this computation is related to the prediction task (see Section general-filtering-section) for a hidden Markov model.

Exercise 2 (mdp-model-exercise) For the $4 \times 3$ world shown in Figure sequential-decision-world-figure, calculate which squares can be reached from (1,1) by the action sequence $[{Right},{Right},{Right},{Up},{Up}]$ and with what probabilities.
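These reachability probabilities can be computed by propagating a distribution over squares through the transition model, which is exactly the HMM prediction recurrence with no evidence. A sketch for the first action sequence, assuming the standard $4 \times 3$ layout (wall at (2,2), terminals at (4,3) and (4,2)) and the usual 80/10/10 motion model:

```python
# Distribution over squares after [Up, Up, Right, Right, Right] from (1,1),
# assuming the standard 4x3 layout: wall at (2,2), terminals (4,3), (4,2),
# moves succeed with prob. 0.8 and slip perpendicular with 0.1 each;
# bumping into a wall or edge leaves the agent where it is.

WALL, TERMINALS = (2, 2), {(4, 3), (4, 2)}
SQUARES = {(x, y) for x in range(1, 5) for y in range(1, 4)} - {WALL}
MOVES = {'Up': (0, 1), 'Down': (0, -1), 'Left': (-1, 0), 'Right': (1, 0)}
SLIPS = {'Up': ('Left', 'Right'), 'Down': ('Left', 'Right'),
         'Left': ('Up', 'Down'), 'Right': ('Up', 'Down')}

def step(sq, direction):
    dx, dy = MOVES[direction]
    target = (sq[0] + dx, sq[1] + dy)
    return target if target in SQUARES else sq    # bump: stay put

def predict(dist, action):
    new = {}
    for sq, p in dist.items():
        if sq in TERMINALS:                       # terminal squares absorb
            new[sq] = new.get(sq, 0.0) + p
            continue
        for d, q in [(action, 0.8), (SLIPS[action][0], 0.1),
                     (SLIPS[action][1], 0.1)]:
            s2 = step(sq, d)
            new[s2] = new.get(s2, 0.0) + p * q
    return new

dist = {(1, 1): 1.0}
for a in ['Up', 'Up', 'Right', 'Right', 'Right']:
    dist = predict(dist, a)
print({sq: round(p, 4) for sq, p in sorted(dist.items()) if p > 0})
```

Swapping in the second sequence handles the other exercise; in both cases the loop is the forward "predict" half of HMM filtering with the update step omitted.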
Explain how this computation is related to the prediction task (see Section general-filtering-section) for a hidden Markov model.

Exercise 3 Select a specific member of the set of policies that are optimal for $R(s)>0$ as shown in Figure sequential-decision-policies-figure(b), and calculate the fraction of time the agent spends in each state, in the limit, if the policy is executed forever. (Hint: Construct the state-to-state transition probability matrix corresponding to the policy and see Exercise markov-convergence-exercise.)

Exercise 4 (nonseparable-exercise) Suppose that we define the utility of a state sequence to be the maximum reward obtained in any state in the sequence. Show that this utility function does not result in stationary preferences between state sequences. Is it still possible to define a utility function on states such that MEU decision making gives optimal behavior?

Exercise 5 Can any finite search problem be translated exactly into a Markov decision problem such that an optimal solution of the latter is also an optimal solution of the former? If so, explain precisely how to translate the problem and how to translate the solution back; if not, explain precisely why not (i.e., give a counterexample).

Exercise 6 (reward-equivalence-exercise) Sometimes MDPs are formulated with a reward function $R(s,a)$ that depends on the action taken or with a reward function $R(s,a,s')$ that also depends on the outcome state.

1. Write the Bellman equations for these formulations.
2. Show how an MDP with reward function $R(s,a,s')$ can be transformed into a different MDP with reward function $R(s,a)$, such that optimal policies in the new MDP correspond exactly to optimal policies in the original MDP.
3. Now do the same to convert MDPs with $R(s,a)$ into MDPs with $R(s)$.

Exercise 7 (threshold-cost-exercise) For the environment shown in Figure sequential-decision-world-figure, find all the threshold values for $R(s)$ such that the optimal policy changes when the threshold is crossed.
You will need a way to calculate the optimal policy and its value for fixed $R(s)$. (Hint: Prove that the value of any fixed policy varies linearly with $R(s)$.)

Exercise 8 (vi-contraction-exercise) Equation (vi-contraction-equation) on page vi-contraction-equation states that the Bellman operator is a contraction.

1. Show that, for any functions $f$ and $g$, $$|\max_a f(a) - \max_a g(a)| \leq \max_a |f(a) - g(a)|.$$
2. Write out an expression for $|(B\,U_i - B\,U'_i)(s)|$ and then apply the result from (1) to complete the proof that the Bellman operator is a contraction.

Exercise 9 This exercise considers two-player MDPs that correspond to zero-sum, turn-taking games like those in Chapter game-playing-chapter. Let the players be $A$ and $B$, and let $R(s)$ be the reward for player $A$ in state $s$. (The reward for $B$ is always equal and opposite.)

1. Let $U_A(s)$ be the utility of state $s$ when it is $A$’s turn to move in $s$, and let $U_B(s)$ be the utility of state $s$ when it is $B$’s turn to move in $s$. All rewards and utilities are calculated from $A$’s point of view (just as in a minimax game tree). Write down Bellman equations defining $U_A(s)$ and $U_B(s)$.
2. Explain how to do two-player value iteration with these equations, and define a suitable termination criterion.
3. Consider the game described in Figure line-game4-figure on page line-game4-figure. Draw the state space (rather than the game tree), showing the moves by $A$ as solid lines and moves by $B$ as dashed lines. Mark each state with $R(s)$. You will find it helpful to arrange the states $(s_A,s_B)$ on a two-dimensional grid, using $s_A$ and $s_B$ as “coordinates.”
4. Now apply two-player value iteration to solve this game, and derive the optimal policy.

(a) $3 \times 3$ world for Exercise 3x3-mdp-exercise. The reward for each state is indicated. The upper right square is a terminal state. (b) $101 \times 3$ world for Exercise 101x3-mdp-exercise (omitting 93 identical columns in the middle).
The start state has reward 0.

Exercise 10 (3x3-mdp-exercise)

Consider the $3 \times 3$ world shown in Figure grid-mdp-figure(a). The transition model is the same as in the $4 \times 3$ world of Figure sequential-decision-world-figure: 80% of the time the agent goes in the direction it selects; the rest of the time it moves at right angles to the intended direction. Implement value iteration for this world for each value of $r$ below. Use discounted rewards with a discount factor of 0.99. Show the policy obtained in each case. Explain intuitively why the value of $r$ leads to each policy.

1. $r = -100$
2. $r = -3$
3. $r = 0$
4. $r = +3$

Exercise 11 (101x3-mdp-exercise)

Consider the $101 \times 3$ world shown in Figure grid-mdp-figure(b). In the start state the agent has a choice of two deterministic actions, Up or Down, but in the other states the agent has one deterministic action, Right. Assuming a discounted reward function, for what values of the discount $\gamma$ should the agent choose Up and for which Down? Compute the utility of each action as a function of $\gamma$. (Note that this simple example actually reflects many real-world situations in which one must weigh the value of an immediate action versus the potential continual long-term consequences, such as choosing to dump pollutants into a lake.)

Exercise 12

Consider an undiscounted MDP having three states, (1, 2, 3), with rewards $-1$, $-2$, $0$, respectively. State 3 is a terminal state. In states 1 and 2 there are two possible actions: $a$ and $b$. The transition model is as follows:

- In state 1, action $a$ moves the agent to state 2 with probability 0.8 and makes the agent stay put with probability 0.2.
- In state 2, action $a$ moves the agent to state 1 with probability 0.8 and makes the agent stay put with probability 0.2.
- In either state 1 or state 2, action $b$ moves the agent to state 3 with probability 0.1 and makes the agent stay put with probability 0.9.

Answer the following questions:
1. What can be determined qualitatively about the optimal policy in states 1 and 2?
2. Apply policy iteration, showing each step in full, to determine the optimal policy and the values of states 1 and 2. Assume that the initial policy has action $b$ in both states.
3. What happens to policy iteration if the initial policy has action $a$ in both states? Does discounting help? Does the optimal policy depend on the discount factor?

Exercise 13

Consider the $4 \times 3$ world shown in Figure sequential-decision-world-figure.

1. Implement an environment simulator for this environment, such that the specific geography of the environment is easily altered. Some code for doing this is already in the online code repository.
2. Create an agent that uses policy iteration, and measure its performance in the environment simulator from various starting states. Perform several experiments from each starting state, and compare the average total reward received per run with the utility of the state, as determined by your algorithm.
3. Experiment with increasing the size of the environment. How does the run time for policy iteration vary with the size of the environment?

Exercise 14 (policy-loss-exercise)

How can the value determination algorithm be used to calculate the expected loss experienced by an agent using a given set of utility estimates ${U}$ and an estimated model ${P}$, compared with an agent using correct values?

Exercise 15 (4x3-pomdp-exercise)

Let the initial belief state $b_0$ for the $4 \times 3$ POMDP on page 4x3-pomdp-page be the uniform distribution over the nonterminal states, i.e., $\langle \frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},0,0 \rangle$. Calculate the exact belief state $b_1$ after the agent moves and its sensor reports 1 adjacent wall.
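The belief update required here is the standard filtering step, $b'(s') \propto P(e \mid s') \sum_s P(s' \mid s, a)\, b(s)$. A minimal numeric sketch of that step (the two-state transition and sensor models below are illustrative placeholders, not the actual $4 \times 3$ world model):

```python
def belief_update(b, T, O_e):
    # One POMDP filtering step.
    # b[s]   : current belief over states
    # T[s][s2]: P(s2 | s, a) for the chosen action a
    # O_e[s2]: P(e | s2) for the observed evidence e
    n = len(b)
    # Prediction: sum_s P(s2 | s, a) b(s)
    b_pred = [sum(T[s][s2] * b[s] for s in range(n)) for s2 in range(n)]
    # Update: weight by observation likelihood, then normalize
    b_new = [O_e[s2] * b_pred[s2] for s2 in range(n)]
    z = sum(b_new)
    return [x / z for x in b_new]

# Tiny illustrative example with two states
b0 = [0.5, 0.5]
T = [[0.9, 0.1],
     [0.2, 0.8]]
O_e = [0.75, 0.25]
b1 = belief_update(b0, T, O_e)
```

For the exercise itself, $T$ encodes the 80/10/10 motion model over the eleven squares and `O_e` the wall-count sensor.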
Also calculate $b_2$ assuming that the same thing happens again.

Exercise 16

What is the time complexity of $d$ steps of POMDP value iteration for a sensorless environment?

Exercise 17 (2state-pomdp-exercise)

Consider a version of the two-state POMDP on page 2state-pomdp-page in which the sensor is 90% reliable in state 0 but provides no information in state 1 (that is, it reports 0 or 1 with equal probability). Analyze, either qualitatively or quantitatively, the utility function and the optimal policy for this problem.

Exercise 18 (dominant-equilibrium-exercise)

Show that a dominant strategy equilibrium is a Nash equilibrium, but not vice versa.

Exercise 19

In the children’s game of rock–paper–scissors each player reveals at the same time a choice of rock, paper, or scissors. Paper wraps rock, rock blunts scissors, and scissors cut paper. In the extended version rock–paper–scissors–fire–water, fire beats rock, paper, and scissors; rock, paper, and scissors beat water; and water beats fire. Write out the payoff matrix and find a mixed-strategy solution to this game.

Exercise 20

Solve the game of three-finger Morra.

Exercise 21

In the Prisoner’s Dilemma, consider the case where after each round, Alice and Bob have probability $X$ of meeting again. Suppose both players choose the perpetual punishment strategy (where each will choose ${refuse}$ unless the other player has ever played ${testify}$). Assume neither player has played ${testify}$ thus far. What is the expected future total payoff for choosing to ${testify}$ versus ${refuse}$ when $X = .2$? How about when $X = .05$?
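These expected totals are geometric sums and are easy to check numerically. The sketch below assumes the chapter’s payoff matrix: mutual ${refuse}$ costs each player 1 point per round, mutual ${testify}$ costs 5, and a lone testifier loses nothing that round (treat these numbers as an assumption if your edition differs):

```python
def expected_refuse(X):
    # Both players refuse forever: -1 per round, and the expected number of
    # rounds (this one plus a geometric continuation) is 1/(1-X).
    return -1 / (1 - X)

def expected_testify(X):
    # 0 this round (lone testifier), then perpetual punishment means mutual
    # testify (-5) in every later round, each reached with probability X^t.
    return -5 * X / (1 - X)
```

Evaluating both functions at $X = .2$ and $X = .05$ answers the first part of the exercise directly.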
For what value of $X$ is the expected future total payoff the same whether one chooses to ${testify}$ or ${refuse}$ in the current round?

Exercise 22

The following payoff matrix, from @Blinder:1983 by way of @Bernstein:1996, shows a game between politicians and the Federal Reserve.

| | Fed: contract | Fed: do nothing | Fed: expand |
| --- | --- | --- | --- |
| **Pol: contract** | $F=7, P=1$ | $F=9, P=4$ | $F=6, P=6$ |
| **Pol: do nothing** | $F=8, P=2$ | $F=5, P=5$ | $F=4, P=9$ |
| **Pol: expand** | $F=3, P=3$ | $F=2, P=7$ | $F=1, P=8$ |

Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can choose to do nothing.) Each side also has preferences for who should do what—neither side wants to look like the bad guys. The payoffs shown are simply the rank orderings: 9 for first choice through 1 for last choice. Find the Nash equilibrium of the game in pure strategies. Is this a Pareto-optimal solution? You might wish to analyze the policies of recent administrations in this light.

Exercise 23

A Dutch auction is similar to an English auction, but rather than starting the bidding at a low price and increasing, in a Dutch auction the seller starts at a high price and gradually lowers the price until some buyer is willing to accept that price. (If multiple bidders accept the price, one is arbitrarily chosen as the winner.) More formally, the seller begins with a price $p$ and gradually lowers $p$ by increments of $d$ until at least one buyer accepts the price. Assuming all bidders act rationally, is it true that for arbitrarily small $d$, a Dutch auction will always result in the bidder with the highest value for the item obtaining the item? If so, show mathematically why.
If not, explain how it may be possible for the bidder with the highest value for the item not to obtain it.

Exercise 24

Imagine an auction mechanism that is just like an ascending-bid auction, except that at the end, the winning bidder, the one who bid $b_{max}$, pays only $b_{max}/2$ rather than $b_{max}$. Assuming all agents are rational, what is the expected revenue to the auctioneer for this mechanism, compared with a standard ascending-bid auction?

Exercise 25

Teams in the National Hockey League historically received 2 points for winning a game and 0 for losing. If the game is tied, an overtime period is played; if nobody wins in overtime, the game is a tie and each team gets 1 point. But league officials felt that teams were playing too conservatively in overtime (to avoid a loss), and it would be more exciting if overtime produced a winner. So in 1999 the officials experimented in mechanism design: the rules were changed, giving a team that loses in overtime 1 point, not 0. It is still 2 points for a win and 1 for a tie.

1. Was hockey a zero-sum game before the rule change? After?
2. Suppose that at a certain time $t$ in a game, the home team has probability $p$ of winning in regulation time, probability $0.78-p$ of losing, and probability 0.22 of going into overtime, where they have probability $q$ of winning, $.9-q$ of losing, and .1 of tying. Give equations for the expected value for the home and visiting teams.
3. Imagine that it were legal and ethical for the two teams to enter into a pact where they agree that they will skate to a tie in regulation time, and then both try in earnest to win in overtime. Under what conditions, in terms of $p$ and $q$, would it be rational for both teams to agree to this pact?
4. @Longley+Sankaran:2005 report that since the rule change, the percentage of games with a winner in overtime went up 18.2%, as desired, but the percentage of overtime games also went up 3.6%.
What does that suggest about possible collusion or conservative play after the rule change?

Exercise 1 (infant-language-exercise)

Consider the problem faced by an infant learning to speak and understand a language. Explain how this process fits into the general learning model. Describe the percepts and actions of the infant, and the types of learning the infant must do. Describe the subfunctions the infant is trying to learn in terms of inputs and outputs, and available example data.

Exercise 2

Repeat Exercise infant-language-exercise for the case of learning to play tennis (or some other sport with which you are familiar). Is this supervised learning or reinforcement learning?

Exercise 3

Draw a decision tree for the problem of deciding whether to move forward at a road intersection, given that the light has just turned green.

Exercise 4

We never test the same attribute twice along one path in a decision tree. Why not?

Exercise 5

Suppose we generate a training set from a decision tree and then apply decision-tree learning to that training set. Is it the case that the learning algorithm will eventually return the correct tree as the training-set size goes to infinity? Why or why not?

Exercise 6 (leaf-classification-exercise)

In the recursive construction of decision trees, it sometimes happens that a mixed set of positive and negative examples remains at a leaf node, even after all the attributes have been used. Suppose that we have $p$ positive examples and $n$ negative examples.

1. Show that the solution used by DECISION-TREE-LEARNING, which picks the majority classification, minimizes the absolute error over the set of examples at the leaf.
2. Show that the class probability $p/(p+n)$ minimizes the sum of squared errors.

Exercise 7 (nonnegative-gain-exercise)

Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples.
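For experimenting with this claim numerically, the gain of a split can be computed directly from the counts $(p_k, n_k)$. A small sketch (function names are illustrative):

```python
from math import log2

def entropy(p, n):
    # Entropy (in bits) of a Boolean distribution with p positives, n negatives.
    total = p + n
    h = 0.0
    for c in (p, n):
        if c > 0:
            q = c / total
            h -= q * log2(q)
    return h

def information_gain(subsets):
    # subsets: list of (p_k, n_k) pairs produced by the attribute's split.
    p = sum(pk for pk, nk in subsets)
    n = sum(nk for pk, nk in subsets)
    remainder = sum((pk + nk) / (p + n) * entropy(pk, nk)
                    for pk, nk in subsets)
    return entropy(p, n) - remainder
```

Trying splits whose subsets all share the same positive ratio, versus splits that do not, illustrates the dichotomy the exercise asks you to prove.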
Show that the attribute has strictly positive information gain unless the ratio $p_k/(p_k+n_k)$ is the same for all $k$.

Exercise 8

Consider the following data set, comprised of three binary input attributes ($A_1$, $A_2$, and $A_3$) and one binary output:

| $\quad\textbf{Example}$ | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad Output\space y$ |
| --- | --- | --- | --- | --- |
| $\textbf{x}_1$ | 1 | 0 | 0 | 0 |
| $\textbf{x}_2$ | 1 | 0 | 1 | 0 |
| $\textbf{x}_3$ | 0 | 1 | 0 | 0 |
| $\textbf{x}_4$ | 1 | 1 | 1 | 1 |
| $\textbf{x}_5$ | 1 | 1 | 0 | 1 |

Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node.

Exercise 9

Construct a data set (set of examples with attributes and classifications) that would cause the decision-tree learning algorithm to find a non-minimal-sized tree. Show the tree constructed by the algorithm and the minimal-sized tree that you can generate by hand.

Exercise 10

A decision graph is a generalization of a decision tree that allows nodes (i.e., attributes used for splits) to have multiple parents, rather than just a single parent. The resulting graph must still be acyclic. Now, consider the XOR function of three binary input attributes, which produces the value 1 if and only if an odd number of the three input attributes has value 1.

1. Draw a minimal-sized decision tree for the three-input XOR function.
2. Draw a minimal-sized decision graph for the three-input XOR function.

Exercise 11 (pruning-DTL-exercise)

This exercise considers $\chi^2$ pruning of decision trees (Section chi-squared-section).

1. Create a data set with two input attributes, such that the information gain at the root of the tree for both attributes is zero, but there is a decision tree of depth 2 that is consistent with all the data. What would $\chi^2$ pruning do on this data set if applied bottom up? If applied top down?
2. Modify DECISION-TREE-LEARNING to include $\chi^2$ pruning.
You might wish to consult @Quinlan:1986 or @Kearns+Mansour:1998 for details.

Exercise 12 (missing-value-DTL-exercise)

The standard DECISION-TREE-LEARNING algorithm described in the chapter does not handle cases in which some examples have missing attribute values.

1. First, we need to find a way to classify such examples, given a decision tree that includes tests on the attributes for which values can be missing. Suppose that an example $\textbf{x}$ has a missing value for attribute $A$ and that the decision tree tests for $A$ at a node that $\textbf{x}$ reaches. One way to handle this case is to pretend that the example has all possible values for the attribute, but to weight each value according to its frequency among all of the examples that reach that node in the decision tree. The classification algorithm should follow all branches at any node for which a value is missing and should multiply the weights along each path. Write a modified classification algorithm for decision trees that has this behavior.
2. Now modify the information-gain calculation so that in any given collection of examples $C$ at a given node in the tree during the construction process, the examples with missing values for any of the remaining attributes are given “as-if” values according to the frequencies of those values in the set $C$.

Exercise 13 (gain-ratio-DTL-exercise)

In Section broadening-decision-tree-section, we noted that attributes with many different possible values can cause problems with the gain measure. Such attributes tend to split the examples into numerous small classes or even singleton classes, thereby appearing to be highly relevant according to the gain measure.
The gain-ratio criterion selects attributes according to the ratio between their gain and their intrinsic information content—that is, the amount of information contained in the answer to the question, “What is the value of this attribute?” The gain-ratio criterion therefore tries to measure how efficiently an attribute provides information on the correct classification of an example. Write a mathematical expression for the information content of an attribute, and implement the gain-ratio criterion in DECISION-TREE-LEARNING.

Exercise 14

Suppose you are running a learning experiment on a new algorithm for Boolean classification. You have a data set consisting of 100 positive and 100 negative examples. You plan to use leave-one-out cross-validation and compare your algorithm to a baseline function, a simple majority classifier. (A majority classifier is given a set of training data and then always outputs the class that is in the majority in the training set, regardless of the input.) You expect the majority classifier to score about 50% on leave-one-out cross-validation, but to your surprise, it scores zero every time. Can you explain why?

Exercise 15

Suppose that a learning algorithm is trying to find a consistent hypothesis when the classifications of examples are actually random. There are $n$ Boolean attributes, and examples are drawn uniformly from the set of $2^n$ possible examples. Calculate the number of examples required before the probability of finding a contradiction in the data reaches 0.5.

Exercise 16

Construct a decision list to classify the data below. Select tests to be as small as possible (in terms of attributes), breaking ties among tests with the same number of attributes by selecting the one that classifies the greatest number of examples correctly.
If multiple tests have the same number of attributes and classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).

| | $\quad A_1\quad$ | $\quad A_2\quad$ | $\quad A_3\quad$ | $\quad A_4\quad$ | $\quad y\quad$ |
| --- | --- | --- | --- | --- | --- |
| $\textbf{x}_1$ | 1 | 0 | 0 | 0 | 1 |
| $\textbf{x}_2$ | 1 | 0 | 1 | 1 | 1 |
| $\textbf{x}_3$ | 0 | 1 | 0 | 0 | 1 |
| $\textbf{x}_4$ | 0 | 1 | 1 | 0 | 0 |
| $\textbf{x}_5$ | 1 | 1 | 0 | 1 | 1 |
| $\textbf{x}_6$ | 0 | 1 | 0 | 1 | 0 |
| $\textbf{x}_7$ | 0 | 0 | 1 | 1 | 1 |
| $\textbf{x}_8$ | 0 | 0 | 1 | 0 | 0 |

Exercise 17

Prove that a decision list can represent the same function as a decision tree while using at most as many rules as there are leaves in the decision tree for that function. Give an example of a function represented by a decision list using strictly fewer rules than the number of leaves in a minimal-sized decision tree for that same function.

Exercise 18 (DL-expressivity-exercise)

This exercise concerns the expressiveness of decision lists (Section learning-theory-section).

1. Show that decision lists can represent any Boolean function, if the size of the tests is not limited.
2. Show that if the tests can contain at most $k$ literals each, then decision lists can represent any function that can be represented by a decision tree of depth $k$.

Exercise 19 (knn-mean-mode)

Suppose a 7-nearest-neighbors regression search returns $\{7, 6, 8, 4, 7, 11, 100\}$ as the 7 nearest $y$ values for a given $x$ value.
What is the value of $\hat{y}$ that minimizes the $L_1$ loss function on this data? There is a common name in statistics for this value as a function of the $y$ values; what is it? Answer the same two questions for the $L_2$ loss function.

Exercise 20 (knn-mean-mode)

Suppose a 7-nearest-neighbors regression search returns $\{4, 2, 8, 4, 9, 11, 100\}$ as the 7 nearest $y$ values for a given $x$ value. What is the value of $\hat{y}$ that minimizes the $L_1$ loss function on this data? There is a common name in statistics for this value as a function of the $y$ values; what is it? Answer the same two questions for the $L_2$ loss function.

Exercise 21 (svm-ellipse-exercise)

Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an ellipse, not a circle? The general equation for a circle (and hence the decision boundary) is $(x_1-a)^2 + (x_2-b)^2 - r^2 = 0$, and the general equation for an ellipse is $c(x_1-a)^2 + d(x_2-b)^2 - 1 = 0$.

1. Expand out the equation for the circle and show what the weights $w_i$ would be for the decision boundary in the four-dimensional feature space $(x_1, x_2, x_1^2, x_2^2)$. Explain why this means that any circle is linearly separable in this space.
2. Do the same for ellipses in the five-dimensional feature space $(x_1, x_2, x_1^2, x_2^2, x_1 x_2)$.

Exercise 22 (svm-exercise)

Construct a support vector machine that computes the XOR function. Use values of +1 and –1 (instead of 1 and 0) for both inputs and outputs, so that an example looks like $([-1, 1], 1)$ or $([-1, -1], -1)$. Map the input $[x_1, x_2]$ into a space consisting of $x_1$ and $x_1 x_2$. Draw the four input points in this space, and the maximal margin separator. What is the margin? Now draw the separating line back in the original Euclidean input space.

Exercise 23 (ensemble-error-exercise)

Consider an ensemble learning algorithm that uses simple majority voting among $K$ learned hypotheses. Suppose that each hypothesis has error $\epsilon$ and that the errors made by each hypothesis are independent of the others’. Calculate a formula for the error of the ensemble algorithm in terms of $K$ and $\epsilon$, and evaluate it for the cases where $K = 5$, 10, and 20 and $\epsilon = 0.1$, 0.2, and 0.4.
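The numerical evaluation in the last part is easy to automate. The sketch below computes the probability that a strict majority of $K$ independent hypotheses are wrong (take $K$ odd so that ties cannot occur):

```python
from math import comb

def ensemble_error(K, eps):
    # Probability that more than half of K independent hypotheses err,
    # where each hypothesis errs with probability eps (K odd avoids ties).
    return sum(comb(K, i) * eps**i * (1 - eps)**(K - i)
               for i in range(K // 2 + 1, K + 1))
```

For example, `ensemble_error(5, 0.1)` is well below 0.1, while with `eps = 0.5` majority voting offers no benefit.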
If the independence assumption is removed, is it possible for the ensemble error to be worse than $\epsilon$?

Exercise 24

Construct by hand a neural network that computes the XOR function of two inputs. Make sure to specify what sort of units you are using.

Exercise 25

A simple perceptron cannot represent XOR (or, generally, the parity function of its inputs). Describe what happens to the weights of a four-input, hard-threshold perceptron, beginning with all weights set to 0.1, as examples of the parity function arrive.

Exercise 26 (linear-separability-exercise)

Recall from Chapter concept-learning-chapter that there are $2^{2^n}$ distinct Boolean functions of $n$ inputs. How many of these are representable by a threshold perceptron?

Exercise 27

Consider the following set of examples, each with six inputs and one target output:

| | | | | | | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\textbf{x}_1$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $\textbf{x}_2$ | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 |
| $\textbf{x}_3$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
| $\textbf{x}_4$ | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
| $\textbf{x}_5$ | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 |
| $\textbf{x}_6$ | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 |
| $\textbf{T}$ | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |

1. Run the perceptron learning rule on these data and show the final weights.
2. Run the decision tree learning rule, and show the resulting decision tree.
3. Comment on your results.

Exercise 28 (perceptron-ML-gradient-exercise)

Section logistic-regression-section (page logistic-regression-section) noted that the output of the logistic function could be interpreted as a probability $p$ assigned by the model to the proposition that $f(\textbf{x}) = 1$; the probability that $f(\textbf{x}) = 0$ is therefore $1-p$.
Write down the probability $p$ as a function of $\textbf{x}$ and calculate the derivative of $\log p$ with respect to each weight $w_i$. Repeat the process for $\log (1-p)$. These calculations give a learning rule for minimizing the negative-log-likelihood loss function for a probabilistic hypothesis. Comment on any resemblance to other learning rules in the chapter.

Exercise 29 (linear-nn-exercise)

Suppose you had a neural network with linear activation functions. That is, for each unit the output is some constant $c$ times the weighted sum of the inputs.

1. Assume that the network has one hidden layer. For a given assignment to the weights $\textbf{w}$, write down equations for the value of the units in the output layer as a function of $\textbf{w}$ and the input layer $\textbf{x}$, without any explicit mention of the output of the hidden layer. Show that there is a network with no hidden units that computes the same function.
2. Repeat the calculation in part (a), but this time do it for a network with any number of hidden layers.
3. Suppose a network with one hidden layer and linear activation functions has $n$ input and output nodes and $h$ hidden nodes. What effect does the transformation in part (a) to a network with no hidden layers have on the total number of weights? Discuss in particular the case $h \ll n$.

Exercise 30

Implement a data structure for layered, feed-forward neural networks, remembering to provide the information needed for both forward evaluation and backward propagation. Using this data structure, write a function NEURAL-NETWORK-OUTPUT that takes an example and a network and computes the appropriate output values.

Exercise 31

Suppose that a training set contains only a single example, repeated 100 times. In 80 of the 100 cases, the single output value is 1; in the other 20, it is 0.
What will a back-propagation network predict for this example, assuming that it has been trained and reaches a global optimum? (Hint: to find the global optimum, differentiate the error function and set it to zero.)

Exercise 32

The neural network whose learning performance is measured in Figure restaurant-back-prop-figure has four hidden nodes. This number was chosen somewhat arbitrarily. Use a cross-validation method to find the best number of hidden nodes.

Exercise 33 (embedding-separability-exercise)

Consider the problem of separating $N$ data points into positive and negative examples using a linear separator. Clearly, this can always be done for $N = 2$ points on a line of dimension $d = 1$, regardless of how the points are labeled or where they are located (unless the points are in the same place).

1. Show that it can always be done for $N = 3$ points on a plane of dimension $d = 2$, unless they are collinear.
2. Show that it cannot always be done for $N = 4$ points on a plane of dimension $d = 2$.
3. Show that it can always be done for $N = 4$ points in a space of dimension $d = 3$, unless they are coplanar.
4. Show that it cannot always be done for $N = 5$ points in a space of dimension $d = 3$.
5. The ambitious student may wish to prove that $N$ points in general position (but not $N+1$) are linearly separable in a space of dimension $N-1$.

Exercise 1 (dbsig-exercise)

Show, by translating into conjunctive normal form and applying resolution, that the conclusion drawn on page dbsig-page concerning Brazilians is sound.

Exercise 2

For each of the following determinations, write down the logical representation and explain why the determination is true (if it is):

1. Design and denomination determine the mass of a coin.
2. For a given program, input determines output.
3. Climate, food intake, exercise, and metabolism determine weight gain and loss.
4. Baldness is determined by the baldness (or lack thereof) of one’s maternal grandfather.
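As a reminder of the notation, a determination $P \succ Q$ states that any two objects agreeing on the predicates in $P$ must also agree on $Q$. The first item in the list above might be written as follows (a sketch; the predicate and variable names are illustrative, not the book's):

```latex
% "Design and denomination determine the mass of a coin"
Design(c, d) \wedge Denomination(c, v) \succ Mass(c, m)
```

The remaining items follow the same pattern, with one predicate per determining factor on the left of $\succ$.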
Exercise 3

For each of the following determinations, write down the logical representation and explain why the determination is true (if it is):

1. Zip code determines the state (U.S.).
2. Design and denomination determine the mass of a coin.
3. Climate, food intake, exercise, and metabolism determine weight gain and loss.
4. Baldness is determined by the baldness (or lack thereof) of one’s maternal grandfather.

Exercise 4

Would a probabilistic version of determinations be useful? Suggest a definition.

Exercise 5 (ir-step-exercise)

Fill in the missing values for the clauses $C_1$ or $C_2$ (or both) in the following sets of clauses, given that $C$ is the resolvent of $C_1$ and $C_2$:

1. $C = {True} \Rightarrow P(A,B)$, $C_1 = P(x,y) \Rightarrow Q(x,y)$, $C_2 = ??$.
2. $C = {True} \Rightarrow P(A,B)$, $C_1 = ??$, $C_2 = ??$.
3. $C = P(x,y) \Rightarrow P(x,f(y))$, $C_1 = ??$, $C_2 = ??$.

If there is more than one possible solution, provide one example of each different kind.

Exercise 6 (prolog-ir-exercise)

Suppose one writes a logic program that carries out a resolution inference step. That is, let ${Resolve}(c_1,c_2,c)$ succeed if $c$ is the result of resolving $c_1$ and $c_2$. Normally, ${Resolve}$ would be used as part of a theorem prover by calling it with $c_1$ and $c_2$ instantiated to particular clauses, thereby generating the resolvent $c$. Now suppose instead that we call it with $c$ instantiated and $c_1$ and $c_2$ uninstantiated. Will this succeed in generating the appropriate results of an inverse resolution step? Would you need any special modifications to the logic programming system for this to work?

Exercise 7 (foil-literals-exercise)

Suppose that FOIL is considering adding a literal to a clause using a binary predicate $P$ and that previous literals (including the head of the clause) contain five different variables.

1. How many functionally different literals can be generated?
Two literals are functionally identical if they differ only in the names of the *new* variables that they contain.
2. Can you find a general formula for the number of different literals with a predicate of arity $r$ when there are $n$ variables previously used?
3. Why does FOIL not allow literals that contain no previously used variables?

Exercise 8

Using the data from the family tree in Figure family2-figure, or a subset thereof, apply the FOIL algorithm to learn a definition for the ${Ancestor}$ predicate.

Exercise 1 (bayes-candy-exercise)

The data used for Figure bayes-candy-figure on page bayes-candy-figure can be viewed as being generated by $h_5$. For each of the other four hypotheses, generate a data set of length 100 and plot the corresponding graphs for $P(h_i|d_1,\ldots,d_N)$ and $P(D_{N+1}=lime|d_1,\ldots,d_N)$. Comment on your results.

Exercise 2

Repeat Exercise bayes-candy-exercise, this time plotting the values of $P(D_{N+1}=lime|h_{MAP})$ and $P(D_{N+1}=lime|h_{ML})$.

Exercise 3 (candy-trade-exercise)

Suppose that Ann’s utilities for cherry and lime candies are $c_A$ and $\ell_A$, whereas Bob’s utilities are $c_B$ and $\ell_B$. (But once Ann has unwrapped a piece of candy, Bob won’t buy it.) Presumably, if Bob likes lime candies much more than Ann, it would be wise for Ann to sell her bag of candies once she is sufficiently sure of its lime content. On the other hand, if Ann unwraps too many candies in the process, the bag will be worth less. Discuss the problem of determining the optimal point at which to sell the bag. Determine the expected utility of the optimal procedure, given the prior distribution from Section statistical-learning-section.

Exercise 4

Two statisticians go to the doctor and are both given the same prognosis: a 40% chance that the problem is the deadly disease $A$, and a 60% chance of the fatal disease $B$. Fortunately, there are anti-$A$ and anti-$B$ drugs that are inexpensive, 100% effective, and free of side-effects.
The statisticians have the choice of taking one drug, both, or neither. What will the first statistician (an avid Bayesian) do? How about the second statistician, who always uses the maximum likelihood hypothesis?

The doctor does some research and discovers that disease $B$ actually comes in two versions, dextro-$B$ and levo-$B$, which are equally likely and equally treatable by the anti-$B$ drug. Now that there are three hypotheses, what will the two statisticians do?

Exercise 5 (BNB-exercise)

Explain how to apply the boosting method of Chapter concept-learning-chapter to naive Bayes learning. Test the performance of the resulting algorithm on the restaurant learning problem.

Exercise 6 (linear-regression-exercise)

Consider $N$ data points $(x_j,y_j)$, where the $y_j$s are generated from the $x_j$s according to the linear Gaussian model in Equation (linear-gaussian-likelihood-equation). Find the values of $\theta_1$, $\theta_2$, and $\sigma$ that maximize the conditional log likelihood of the data.

Exercise 7 (noisy-OR-ML-exercise)

Consider the noisy-OR model for fever described in Section canonical-distribution-section. Explain how to apply maximum-likelihood learning to fit the parameters of such a model to a set of complete data. (Hint: use the chain rule for partial derivatives.)

Exercise 8 (beta-integration-exercise)

This exercise investigates properties of the Beta distribution defined in Equation (beta-equation).

1. By integrating over the range $[0,1]$, show that the normalization constant for the distribution ${Beta}[a,b]$ is given by $\alpha = \Gamma(a+b)/\Gamma(a)\Gamma(b)$, where $\Gamma(x)$ is the Gamma function, defined by $\Gamma(x+1) = x\cdot\Gamma(x)$ and $\Gamma(1) = 1$. (For integer $x$, $\Gamma(x+1) = x!$.)
2. Show that the mean is $a/(a+b)$.
3. Find the mode(s) (the most likely value(s) of $\theta$).
4. Describe the distribution ${Beta}[\epsilon,\epsilon]$ for very small $\epsilon$.
What happens as such a distribution is updated?

Exercise 9 (ML-parents-exercise)

Consider an arbitrary Bayesian network, a complete data set for that network, and the likelihood for the data set according to the network. Give a simple proof that the likelihood of the data cannot decrease if we add a new link to the network and recompute the maximum-likelihood parameter values.

Exercise 10

Consider a single Boolean random variable $Y$ (the “classification”). Let the prior probability $P(Y=true)$ be $\pi$. Let’s try to find $\pi$, given a training set $D=(y_1,\ldots,y_N)$ with $N$ independent samples of $Y$. Furthermore, suppose $p$ of the $N$ are positive and $n$ of the $N$ are negative.

1. Write down an expression for the likelihood of $D$ (i.e., the probability of seeing this particular sequence of examples, given a fixed value of $\pi$) in terms of $\pi$, $p$, and $n$.
2. By differentiating the log likelihood $L$, find the value of $\pi$ that maximizes the likelihood.
3. Now suppose we add in $k$ Boolean random variables $X_1, X_2,\ldots,X_k$ (the “attributes”) that describe each sample, and suppose we assume that the attributes are conditionally independent of each other given the goal $Y$. Draw the Bayes net corresponding to this assumption.
4. Write down the likelihood for the data including the attributes, using the following additional notation:
   - $\alpha_i$ is $P(X_i=true | Y=true)$.
   - $\beta_i$ is $P(X_i=true | Y=false)$.
   - $p_i^+$ is the count of samples for which $X_i=true$ and $Y=true$.
   - $n_i^+$ is the count of samples for which $X_i=false$ and $Y=true$.
   - $p_i^-$ is the count of samples for which $X_i=true$ and $Y=false$.
   - $n_i^-$ is the count of samples for which $X_i=false$ and $Y=false$.

   [Hint: consider first the probability of seeing a single example with specified values for $X_1, X_2,\ldots,X_k$ and $Y$.]
By differentiating the log likelihood $L$, find the values of $\alpha_i$ and $\beta_i$ (in terms of the various counts) that maximize the likelihood, and say in words what these values represent.
6. Let $k = 2$, and consider a data set containing all four possible examples of the xor function. Compute the maximum-likelihood estimates of $\pi$, $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$.
7. Given these estimates of $\pi$, $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$, what are the posterior probabilities $P(Y=true \mid x_1,x_2)$ for each example?

Exercise 11 Consider the application of EM to learn the parameters for the network in Figure mixture-networks-figure(a), given the true parameters in Equation (candy-true-equation).

1. Explain why the EM algorithm would not work if there were just two attributes in the model rather than three.
2. Show the calculations for the first iteration of EM starting from Equation (candy-64-equation).
3. What happens if we start with all the parameters set to the same value $p$? (Hint: you may find it helpful to investigate this empirically before deriving the general result.)
4. Write out an expression for the log likelihood of the tabulated candy data on page candy-counts-page in terms of the parameters, calculate the partial derivatives with respect to each parameter, and investigate the nature of the fixed point reached in part (c).

Exercise 1 Implement a passive learning agent in a simple environment, such as the $4\times 3$ world. For the case of an initially unknown environment model, compare the learning performance of the direct utility estimation, TD, and ADP algorithms. Do the comparison for the optimal policy and for several random policies. For which do the utility estimates converge faster? What happens when the size of the environment is increased? (Try environments with and without obstacles.)

Exercise 2 Chapter complex-decisions-chapter defined a proper policy for an MDP as one that is guaranteed to reach a terminal state.
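The TD utility update at the heart of Exercise 1 is compact enough to sketch. A minimal dict-based version; the state names, learning rate $\alpha$, and discount $\gamma$ below are illustrative assumptions, not prescribed by the exercise:

```python
def td_update(U, s, s_next, reward, alpha=0.1, gamma=1.0):
    """One temporal-difference update: U(s) <- U(s) + alpha*(r + gamma*U(s') - U(s)).

    U is a dict mapping states to utility estimates; unseen states start at 0.
    """
    U.setdefault(s, 0.0)
    U.setdefault(s_next, 0.0)
    U[s] += alpha * (reward + gamma * U[s_next] - U[s])
    return U
```

A passive TD agent simply calls this once per observed transition along a trial; the ADP agent in the same exercise instead re-solves the learned model, which is costlier per step but uses each observation more fully.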
Show that it is possible for a passive ADP agent to learn a transition model for which its policy $\pi$ is improper even if $\pi$ is proper for the true MDP; with such models, the POLICY-EVALUATION step may fail if $\gamma=1$. Show that this problem cannot arise if POLICY-EVALUATION is applied to the learned model only at the end of a trial.

Exercise 3 (prioritized-sweeping-exercise) Starting with the passive ADP agent, modify it to use an approximate ADP algorithm as discussed in the text. Do this in two steps:

1. Implement a priority queue for adjustments to the utility estimates. Whenever a state is adjusted, all of its predecessors also become candidates for adjustment and should be added to the queue. The queue is initialized with the state from which the most recent transition took place. Allow only a fixed number of adjustments.
2. Experiment with various heuristics for ordering the priority queue, examining their effect on learning rates and computation time.

Exercise 4 The direct utility estimation method in Section passive-rl-section uses distinguished terminal states to indicate the end of a trial. How could it be modified for environments with discounted rewards and no terminal states?

Exercise 5 Write out the parameter update equations for TD learning with
$$\hat{U}(x,y) = \theta_0 + \theta_1 x + \theta_2 y + \theta_3\,\sqrt{(x-x_g)^2 + (y-y_g)^2}.$$

Exercise 6 Adapt the vacuum world (Chapter agents-chapter) for reinforcement learning by including rewards for squares being clean. Make the world observable by providing suitable percepts. Now experiment with different reinforcement learning agents. Is function approximation necessary for success? What sort of approximator works for this application?

Exercise 7 (approx-LMS-exercise) Implement an exploring reinforcement learning agent that uses direct utility estimation. Make two versions—one with a tabular representation and one using the function approximator in Equation (4x3-linear-approx-equation).
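Exercises 5 and 7 both involve a linear-plus-distance function approximator; the gradient-TD parameter update it calls for, $\theta_i \leftarrow \theta_i + \alpha\,[r + \gamma\hat{U}(s') - \hat{U}(s)]\,\partial\hat{U}(s)/\partial\theta_i$, can be sketched as follows (the step size and goal coordinates are assumptions for illustration):

```python
import math

def features(x, y, goal):
    """Feature vector for U_hat = th0 + th1*x + th2*y + th3*distance-to-goal."""
    xg, yg = goal
    return [1.0, x, y, math.sqrt((x - xg) ** 2 + (y - yg) ** 2)]

def td_step(theta, s, s_next, reward, goal, alpha=0.05, gamma=1.0):
    """Return updated parameters after one gradient-TD step on transition s -> s_next.

    For a linear approximator, dU_hat/dtheta_i is just the i-th feature of s.
    """
    f = features(*s, goal)
    f_next = features(*s_next, goal)
    u = sum(t * fi for t, fi in zip(theta, f))
    u_next = sum(t * fi for t, fi in zip(theta, f_next))
    delta = reward + gamma * u_next - u          # TD error
    return [t + alpha * delta * fi for t, fi in zip(theta, f)]
```

The tabular version of Exercise 7 is the special case where each state has its own indicator feature.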
Compare their performance in three environments:

1. The $4\times 3$ world described in the chapter.
2. A $10\times 10$ world with no obstacles and a $+1$ reward at (10,10).
3. A $10\times 10$ world with no obstacles and a $+1$ reward at (5,5).

Exercise 8 Devise suitable features for reinforcement learning in stochastic grid worlds (generalizations of the $4\times 3$ world) that contain multiple obstacles and multiple terminal states with rewards of $+1$ or $-1$.

Exercise 9 Extend the standard game-playing environment (Chapter game-playing-chapter) to incorporate a reward signal. Put two reinforcement learning agents into the environment (they may, of course, share the agent program) and have them play against each other. Apply the generalized TD update rule (Equation (generalized-td-equation)) to update the evaluation function. You might wish to start with a simple linear weighted evaluation function and a simple game, such as tic-tac-toe.

Exercise 10 (10x10-exercise) Compute the true utility function and the best linear approximation in $x$ and $y$ (as in Equation (4x3-linear-approx-equation)) for the following environments:

1. A $10\times 10$ world with a single $+1$ terminal state at (10,10).
2. As in (a), but add a $-1$ terminal state at (10,1).
3. As in (b), but add obstacles in 10 randomly selected squares.
4. As in (b), but place a wall stretching from (5,2) to (5,9).
5. As in (a), but with the terminal state at (5,5).

The actions are deterministic moves in the four directions. In each case, compare the results using three-dimensional plots. For each environment, propose additional features (besides $x$ and $y$) that would improve the approximation and show the results.

Exercise 11 Implement the REINFORCE and PEGASUS algorithms and apply them to the $4\times 3$ world, using a policy family of your own choosing.
Comment on the results.

Exercise 12 Investigate the application of reinforcement learning ideas to the modeling of human and animal behavior.

Exercise 13 Is reinforcement learning an appropriate abstract model for evolution? What connection exists, if any, between hardwired reward signals and evolutionary fitness?

Exercise 1 This exercise explores the quality of the $n$-gram model of language. Find or create a monolingual corpus of 100,000 words or more. Segment it into words, and compute the frequency of each word. How many distinct words are there? Also count frequencies of bigrams (two consecutive words) and trigrams (three consecutive words). Now use those frequencies to generate language: from the unigram, bigram, and trigram models, in turn, generate a 100-word text by making random choices according to the frequency counts. Compare the three generated texts with actual language. Finally, calculate the perplexity of each model.

Exercise 2 Write a program to do **segmentation** of words without spaces. Given a string, such as the URL “thelongestlistofthelongeststuffatthelongestdomainnameatlonglast.com,” return a list of component words: [“the,” “longest,” “list,” $\ldots$]. This task is useful for parsing URLs, for spelling correction when words run together, and for languages such as Chinese that do not have spaces between words. It can be solved with a unigram or bigram word model and a dynamic programming algorithm similar to the Viterbi algorithm.

Exercise 3 Zipf’s law of word distribution states the following: Take a large corpus of text, count the frequency of every word in the corpus, and then rank these frequencies in decreasing order. Let $f_I$ be the $I$th largest frequency in this list; that is, $f_1$ is the frequency of the most common word (usually “the”), $f_2$ is the frequency of the second most common word, and so on. Zipf’s law states that $f_I$ is approximately equal to $\alpha / I$ for some constant $\alpha$.
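The dynamic-programming segmentation suggested in Exercise 2 can be sketched as a memoized recursion over prefixes. The toy unigram probabilities below are invented for illustration; a real solution would estimate them from a corpus:

```python
import math
from functools import lru_cache

# Toy unigram model (made-up probabilities); estimate these from a corpus in practice.
UNIGRAM = {
    "the": 0.05, "longest": 0.001, "list": 0.002, "of": 0.03,
    "stuff": 0.0005, "at": 0.02, "domain": 0.0004, "name": 0.001,
    "long": 0.002, "last": 0.002,
}

def word_prob(w):
    # Smooth unseen strings (with a length penalty) so the search never dead-ends.
    return UNIGRAM.get(w, 1e-12 / len(w))

@lru_cache(maxsize=None)
def segment(text):
    """Return (log-probability, words) of the most probable segmentation of text."""
    if not text:
        return 0.0, ()
    # Try every first-word split; recursion on the remainder is cached.
    lp, i = max(
        (math.log(word_prob(text[:i])) + segment(text[i:])[0], i)
        for i in range(1, len(text) + 1)
    )
    return lp, (text[:i],) + segment(text[i:])[1]
```

Replacing the unigram score with a bigram score conditioned on the previous word turns this into the Viterbi-style algorithm the exercise mentions.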
The law tends to be highly accurate except for very small and very large values of $I$.

Exercise 4 Choose a corpus of at least 20,000 words of online text, and verify Zipf’s law experimentally. Define an error measure and find the value of $\alpha$ where Zipf’s law best matches your experimental data. Create a log–log graph plotting $f_I$ vs. $I$ and $\alpha/I$ vs. $I$. (On a log–log graph, the function $\alpha/I$ is a straight line.) In carrying out the experiment, be sure to eliminate any formatting tokens (e.g., HTML tags) and normalize upper and lower case.

Exercise 5 (Adapted from Jurafsky+Martin:2000.) In this exercise you will develop a classifier for authorship: given a text, the classifier predicts which of two candidate authors wrote the text. Obtain samples of text from two different authors. Separate them into training and test sets. Now train a language model on the training set. You can choose what features to use; $n$-grams of words or letters are the easiest, but you can add additional features that you think may help. Then compute the probability of the text under each language model and choose the most probable model. Assess the accuracy of this technique. How does accuracy change as you alter the set of features? This subfield of linguistics is called stylometry; its successes include the identification of the author of the disputed Federalist Papers Mosteller+Wallace:1964 and some disputed works of Shakespeare Hope:1994. Khmelev+Tweedie:2001 produce good results with a simple letter bigram model.

Exercise 6 This exercise concerns the classification of spam email. Create a corpus of spam email and one of non-spam mail. Examine each corpus and decide what features appear to be useful for classification: unigram words? bigrams? message length, sender, time of arrival?
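For Exercise 4, one convenient error measure is the squared error $\sum_I (f_I - \alpha/I)^2$, for which the best-fitting $\alpha$ has a closed form. A sketch (the regex tokenizer here is deliberately crude and is an assumption, not a prescribed method):

```python
import re
from collections import Counter

def zipf_alpha(text):
    """Least-squares fit of f_I ~ alpha/I over ranked word frequencies.

    Minimizing sum_I (f_I - alpha/I)^2 gives alpha = (sum_I f_I/I) / (sum_I 1/I^2).
    """
    words = re.findall(r"[a-z']+", text.lower())   # crude tokenizer; drops markup
    freqs = sorted(Counter(words).values(), reverse=True)
    num = sum(f / i for i, f in enumerate(freqs, start=1))
    den = sum(1.0 / (i * i) for i in range(1, len(freqs) + 1))
    return num / den
```

Plotting `freqs` against rank on log–log axes alongside the line $\alpha/I$ then gives the graph the exercise asks for.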
Then train a classification algorithm (decision tree, naive Bayes, SVM, logistic regression, or some other algorithm of your choosing) on a training set and report its accuracy on a test set.

Exercise 7 Create a test set of ten queries, and pose them to three major Web search engines. Evaluate each one for precision at 1, 3, and 10 documents. Can you explain the differences between engines?

Exercise 8 Try to ascertain which of the search engines from the previous exercise are using case folding, stemming, synonyms, and spelling correction.

Exercise 9 Estimate how much storage space is necessary for the index to a 100-billion-page corpus of Web pages. Show the assumptions you made.

Exercise 10 Write a regular expression or a short program to extract company names. Test it on a corpus of business news articles. Report your recall and precision.

Exercise 11 Consider the problem of trying to evaluate the quality of an IR system that returns a ranked list of answers (like most Web search engines). The appropriate measure of quality depends on the presumed model of what the searcher is trying to achieve, and what strategy she employs. For each of the following models, propose a corresponding numeric measure.

1. The searcher will look at the first twenty answers returned, with the objective of getting as much relevant information as possible.
2. The searcher needs only one relevant document, and will go down the list until she finds the first one.
3. The searcher has a fairly narrow query and is able to examine all the answers retrieved. She wants to be sure that she has seen everything in the document collection that is relevant to her query. (E.g., a lawyer wants to be sure that she has found all relevant precedents, and is willing to spend considerable resources on that.)
4. The searcher needs just one document relevant to the query, and can afford to pay a research assistant for an hour’s work looking through the results. The assistant can look through 100 retrieved documents in an hour.
The assistant will charge the searcher for the full hour regardless of whether he finds it immediately or at the end of the hour.
5. The searcher will look through all the answers. Examining a document has cost \$A; finding a relevant document has value \$B; failing to find a relevant document has cost \$C for each relevant document not found.
6. The searcher wants to collect as many relevant documents as possible, but needs steady encouragement. She looks through the documents in order. If the documents she has looked at so far are mostly good, she will continue; otherwise, she will stop.

Exercise 1 (washing-clothes-exercise) Read the following text once for understanding, and remember as much of it as you can. There will be a test later.

> The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run this may not seem important but complications can easily arise. A mistake is expensive as well. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one can never tell. After the procedure is completed one arranges the material into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will have to be repeated. However, this is part of life.

Exercise 2 An HMM grammar is essentially a standard HMM whose state variable is $N$ (nonterminal, with values such as $Det$, $Adjective$, $Noun$, and so on) and whose evidence variable is $W$ (word, with values such as $is$, $duck$, and so on).
The HMM model includes a prior $\textbf{P}(N_0)$, a transition model $\textbf{P}(N_{t+1}\mid N_t)$, and a sensor model $\textbf{P}(W_t\mid N_t)$. Show that every HMM grammar can be written as a PCFG. [Hint: start by thinking about how the HMM prior can be represented by PCFG rules for the sentence symbol. You may find it helpful to illustrate for the particular HMM with values $A$, $B$ for $N$ and values $x$, $y$ for $W$.]

Exercise 3 Consider the following PCFG for simple verb phrases:

> 0.1: VP $\rightarrow$ Verb
> 0.2: VP $\rightarrow$ Copula Adjective
> 0.5: VP $\rightarrow$ Verb the Noun
> 0.2: VP $\rightarrow$ VP Adverb
> 0.5: Verb $\rightarrow$ is
> 0.5: Verb $\rightarrow$ shoots
> 0.8: Copula $\rightarrow$ is
> 0.2: Copula $\rightarrow$ seems
> 0.5: Adjective $\rightarrow$ unwell
> 0.5: Adjective $\rightarrow$ well
> 0.5: Adverb $\rightarrow$ well
> 0.5: Adverb $\rightarrow$ badly
> 0.6: Noun $\rightarrow$ duck
> 0.4: Noun $\rightarrow$ well

1. Which of the following have a nonzero probability as a VP? (i) shoots the duck well well well (ii) seems the well well (iii) shoots the unwell well badly
2. What is the probability of generating “is well well”?
3. What types of ambiguity are exhibited by the phrase in (b)?
4. Given any PCFG, is it possible to calculate the probability that the PCFG generates a string of exactly 10 words?

Exercise 4 Consider the following simple PCFG for noun phrases:

> 0.6: NP $\rightarrow$ Det AdjString Noun
> 0.4: NP $\rightarrow$ Det NounNounCompound
> 0.5: AdjString $\rightarrow$ Adj AdjString
> 0.5: AdjString $\rightarrow$ $\Lambda$
> 1.0: NounNounCompound $\rightarrow$ Noun
> 0.8: Det $\rightarrow$ the
> 0.2: Det $\rightarrow$ a
> 0.5: Adj $\rightarrow$ small
> 0.5: Adj $\rightarrow$ green
> 0.6: Noun $\rightarrow$ village
> 0.4: Noun $\rightarrow$ green

where $\Lambda$ denotes the empty string.

1. What is the longest NP that can be generated by this grammar? (i) three words (ii) four words (iii) infinitely many words
2.
Which of the following have a nonzero probability of being generated as complete NPs? (i) a small green village (ii) a green green green (iii) a small village green
3. What is the probability of generating “the green green”?
4. What types of ambiguity are exhibited by the phrase in (c)?
5. Given any PCFG and any finite word sequence, is it possible to calculate the probability that the sequence was generated by the PCFG?

Exercise 5 Outline the major differences between Java (or any other computer language with which you are familiar) and English, commenting on the “understanding” problem in each case. Think about such things as grammar, syntax, semantics, pragmatics, compositionality, context-dependence, lexical ambiguity, syntactic ambiguity, reference finding (including pronouns), background knowledge, and what it means to “understand” in the first place.

Exercise 6 This exercise concerns grammars for very simple languages.

1. Write a context-free grammar for the language $a^n b^n$.
2. Write a context-free grammar for the palindrome language: the set of all strings whose second half is the reverse of the first half.
3. Write a context-sensitive grammar for the duplicate language: the set of all strings whose second half is the same as the first half.

Exercise 7 Consider the sentence “Someone walked slowly to the supermarket” and a lexicon consisting of the following words:

$Pronoun \rightarrow \textbf{someone} \quad Verb \rightarrow \textbf{walked}$
$Adv \rightarrow \textbf{slowly} \quad Prep \rightarrow \textbf{to}$
$Article \rightarrow \textbf{the} \quad Noun \rightarrow \textbf{supermarket}$

Which of the following three grammars, combined with the lexicon, generates the given sentence?
Show the corresponding parse tree(s).

| $(A)$ | $(B)$ | $(C)$ |
| --- | --- | --- |
| $S\rightarrow NP\ VP$ | $S\rightarrow NP\ VP$ | $S\rightarrow NP\ VP$ |
| $NP\rightarrow Pronoun$ | $NP\rightarrow Pronoun$ | $NP\rightarrow Pronoun$ |
| $NP\rightarrow Article\ Noun$ | $NP\rightarrow Noun$ | $NP\rightarrow Article\ NP$ |
| $VP\rightarrow VP\ PP$ | $NP\rightarrow Article\ NP$ | $VP\rightarrow Verb\ Adv$ |
| $VP\rightarrow VP\ Adv\ Adv$ | $VP\rightarrow Verb\ Vmod$ | $Adv\rightarrow Adv\ Adv$ |
| $VP\rightarrow Verb$ | $Vmod\rightarrow Adv\ Vmod$ | $Adv\rightarrow PP$ |
| $PP\rightarrow Prep\ NP$ | $Vmod\rightarrow Adv$ | $PP\rightarrow Prep\ NP$ |
| $NP\rightarrow Noun$ | $Adv\rightarrow PP$ | $NP\rightarrow Noun$ |
| $\quad$ | $PP\rightarrow Prep\ NP$ | $\quad$ |

For each of the preceding three grammars, write down three sentences of English and three sentences of non-English generated by the grammar. Each sentence should be significantly different, should be at least six words long, and should include some new lexical entries (which you should define). Suggest ways to improve each grammar to avoid generating the non-English sentences.

Exercise 8 Collect some examples of time expressions, such as “two o’clock,” “midnight,” and “12:46.” Also think up some examples that are ungrammatical, such as “thirteen o’clock” or “half past two fifteen.” Write a grammar for the time language.

Exercise 9 Some linguists have argued as follows: Children learning a language hear only positive examples of the language and no negative examples. Therefore, the hypothesis that “every possible sentence is in the language” is consistent with all the observed examples. Moreover, this is the simplest consistent hypothesis. Furthermore, all grammars for languages that are supersets of the true language are also consistent with the observed data.
Yet children do induce (more or less) the right grammar. It follows that they begin with very strong innate grammatical constraints that rule out all of these more general hypotheses a priori.

Comment on the weak point(s) in this argument from a statistical learning viewpoint.

Exercise 10 (chomsky-form-exercise) In this exercise you will transform ${\large\varepsilon_0}$ into Chomsky Normal Form (CNF). There are five steps: (a) Add a new start symbol, (b) Eliminate $\epsilon$ rules, (c) Eliminate multiple words on right-hand sides, (d) Eliminate rules of the form (${\it X}\rightarrow{\it Y}$), (e) Convert long right-hand sides into binary rules.

1. The start symbol, $S$, can occur only on the left-hand side in CNF. Replace ${\it S}$ everywhere by a new symbol ${\it S'}$ and add a rule of the form ${\it S}\rightarrow{\it S'}$.
2. The empty string, $\epsilon$, cannot appear on the right-hand side in CNF. ${\large\varepsilon_0}$ does not have any rules with $\epsilon$, so this is not an issue.
3. A word can appear on the right-hand side in a rule only of the form (${\it X}\rightarrow$ *word*). Replace each rule of the form (${\it X}\rightarrow\ldots$ *word* $\ldots$) with (${\it X}\rightarrow\ldots{\it W'}\ldots$) and (${\it W'}\rightarrow$ *word*), using a new symbol ${\it W'}$.
4. A rule (${\it X}\rightarrow{\it Y}$) is not allowed in CNF; it must be (${\it X}\rightarrow{\it Y}\ {\it Z}$) or (${\it X}\rightarrow$ *word*). Replace each rule of the form (${\it X}\rightarrow{\it Y}$) with a set of rules of the form (${\it X}\rightarrow\ldots$), one for each rule (${\it Y}\rightarrow\ldots$), where ($\ldots$) indicates one or more symbols.
5.
Replace each rule of the form (${\it X}\rightarrow{\it Y}\ {\it Z}\ \ldots$) with two rules, (${\it X}\rightarrow{\it Y}\ {\it Z'}$) and (${\it Z'}\rightarrow{\it Z}\ \ldots$), where ${\it Z'}$ is a new symbol.

Show each step of the process and the final set of rules.

Exercise 11 Consider the following toy grammar:

> $S \rightarrow NP\ VP$
> $NP \rightarrow Noun$
> $NP \rightarrow NP\ and\ NP$
> $NP \rightarrow NP\ PP$
> $VP \rightarrow Verb$
> $VP \rightarrow VP\ and\ VP$
> $VP \rightarrow VP\ PP$
> $PP \rightarrow Prep\ NP$
> $Noun \rightarrow Sally \mid pools \mid streams \mid swims$
> $Prep \rightarrow in$
> $Verb \rightarrow pools \mid streams \mid swims$

1. Show all the parse trees in this grammar for the sentence “Sally swims in streams and pools.”
2. Show all the table entries that would be made by a (non-probabilistic) CYK parser on this sentence.

Exercise 12 (exercise-subj-verb-agree) Using DCG notation, write a grammar for a language that is just like ${\large\varepsilon_1}$, except that it enforces agreement between the subject and verb of a sentence and thus does not generate ungrammatical sentences such as “I smells the wumpus.”

Exercise 13 Consider the following PCFG:

> $S \rightarrow NP\ VP\ [1.0]$
> $NP \rightarrow \textit{Noun}\ [0.6] \mid \textit{Pronoun}\ [0.4]$
> $VP \rightarrow \textit{Verb}\ NP\ [0.8] \mid \textit{Modal}\ \textit{Verb}\ [0.2]$
> $\textit{Noun} \rightarrow \textbf{can}\ [0.1] \mid \textbf{fish}\ [0.3] \mid \ldots$
> $\textit{Pronoun} \rightarrow \textbf{I}\ [0.4] \mid \ldots$
> $\textit{Verb} \rightarrow \textbf{can}\ [0.01] \mid \textbf{fish}\ [0.1] \mid \ldots$
> $\textit{Modal} \rightarrow \textbf{can}\ [0.3] \mid \ldots$

The sentence “I can fish” has two parse trees with this grammar.
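As a check on the hand calculation this exercise asks for, the total probability of the sentence can be computed with a brute-force inside recursion. A sketch restricted to the words listed above (the elided “$\ldots$” entries cannot appear in this sentence, so omitting them does not change the result):

```python
# PCFG from the exercise, restricted to the listed words.
RULES = {
    "S": [(1.0, ("NP", "VP"))],
    "NP": [(0.6, ("Noun",)), (0.4, ("Pronoun",))],
    "VP": [(0.8, ("Verb", "NP")), (0.2, ("Modal", "Verb"))],
    "Noun": [(0.1, ("can",)), (0.3, ("fish",))],
    "Pronoun": [(0.4, ("I",))],
    "Verb": [(0.01, ("can",)), (0.1, ("fish",))],
    "Modal": [(0.3, ("can",))],
}

def inside(symbol, words):
    """P(symbol derives exactly `words`); assumes no empty right-hand sides."""
    if symbol not in RULES:                        # terminal word
        return 1.0 if words == (symbol,) else 0.0
    return sum(p * splits(rhs, words) for p, rhs in RULES[symbol])

def splits(rhs, words):
    """Sum over all ways to divide `words` among `rhs` symbols (>= 1 word each)."""
    if not rhs:
        return 1.0 if not words else 0.0
    head, rest = rhs[0], rhs[1:]
    total = 0.0
    for i in range(1, len(words) - len(rest) + 1):
        left = inside(head, words[:i])
        if left:
            total += left * splits(rest, words[i:])
    return total
```

`inside("S", ("I", "can", "fish"))` sums the prior probabilities of both parse trees; dividing each tree's prior probability by this total gives the conditional probabilities the exercise asks for.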
Show the two trees, their prior probabilities, and their conditional probabilities, given the sentence.

Exercise 14 An augmented context-free grammar can represent languages that a regular context-free grammar cannot. Show an augmented context-free grammar for the language $a^n b^n c^n$. The allowable values for augmentation variables are 1 and $SUCCESSOR(n)$, where $n$ is a value. The rule for a sentence in this language is
$$S(n) \rightarrow A(n)\ B(n)\ C(n).$$
Show the rule(s) for each of ${\it A}$, ${\it B}$, and ${\it C}$.

Exercise 15 Augment the ${\large\varepsilon_1}$ grammar so that it handles article–noun agreement. That is, make sure that “agents” and “an agent” are ${\it NP}$s, but “agent” and “an agents” are not.

Exercise 16 Consider the following sentence (from The New York Times, July 28, 2008):

> Banks struggling to recover from multibillion-dollar loans on real estate are curtailing loans to American businesses, depriving even healthy companies of money for expansion and hiring.

1. Which of the words in this sentence are lexically ambiguous?
2. Find two cases of syntactic ambiguity in this sentence (there are more than two).
3. Give an instance of metaphor in this sentence.
4. Can you find semantic ambiguity?

Exercise 17 (washing-clothes2-exercise) Without looking back at Exercise washing-clothes-exercise, answer the following questions:

1. What are the four steps that are mentioned?
2. What step is left out?
3. What is “the material” that is mentioned in the text?
4. What kind of mistake would be expensive?
5. Is it better to do too few things or too many? Why?

Exercise 18 Select five sentences and submit them to an online translation service. Translate them from English to another language and back to English. Rate the resulting sentences for grammaticality and preservation of meaning. Repeat the process; does the second round of iteration give worse results or the same results? Does the choice of intermediate language make a difference to the quality of the results?
If you know a foreign language, look at the translation of one paragraph into that language. Count and describe the errors made, and conjecture why these errors were made.

Exercise 19 The $D_i$ values for the sentence in Figure mt-alignment-figure sum to 0. Will that be true of every translation pair? Prove it or give a counterexample.

Exercise 20 (Adapted from [Knight:1999].) Our translation model assumes that, after the phrase translation model selects phrases and the distortion model permutes them, the language model can unscramble the permutation. This exercise investigates how sensible that assumption is. Try to unscramble these proposed lists of phrases into the correct order:

1. have, programming, a, seen, never, I, language, better
2. loves, john, mary
3. is the, communication, exchange of, intentional, information brought, by, about, the production, perception of, and signs, from, drawn, a, of, system, signs, conventional, shared
4. created, that, we hold these, to be, all men, truths, are, equal, self-evident

Which ones could you do? What type of knowledge did you draw upon? Train a bigram model from a training corpus, and use it to find the highest-probability permutation of some sentences from a test corpus. Report on the accuracy of this model.

Exercise 21 Calculate the most probable path through the HMM in Figure sr-hmm-figure for the output sequence $[C_1,C_2,C_3,C_4,C_4,C_6,C_7]$. Also give its probability.

Exercise 22 We forgot to mention that the text in Exercise washing-clothes-exercise is entitled “Washing Clothes.” Reread the text and answer the questions in Exercise washing-clothes2-exercise. Did you do better this time? Bransford and Johnson [Bransford+Johnson:1973] used this text in a controlled experiment and found that the title helped significantly. What does this tell you about how language and memory work?

Exercise 1 In the shadow of a tree with a dense, leafy canopy, one sees a number of light spots. Surprisingly, they all appear to be circular. Why?
After all, the gaps between the leaves through which the sun shines are not likely to be circular.

Exercise 2 Consider a picture of a white sphere floating in front of a black backdrop. The image curve separating white pixels from black pixels is sometimes called the “outline” of the sphere. Show that the outline of a sphere, viewed in a perspective camera, can be an ellipse. Why do spheres not look like ellipses to you?

Exercise 3 Consider an infinitely long cylinder of radius $r$ oriented with its axis along the $y$-axis. The cylinder has a Lambertian surface and is viewed by a camera along the positive $z$-axis. What will you expect to see in the image if the cylinder is illuminated by a point source at infinity located on the positive $x$-axis? Draw the contours of constant brightness in the projected image. Are the contours of equal brightness uniformly spaced?

Exercise 4 Edges in an image can correspond to a variety of events in a scene. Consider Figure illuminationfigure (page illuminationfigure), and assume that it is a picture of a real three-dimensional scene. Identify ten different brightness edges in the image, and for each, state whether it corresponds to a discontinuity in (a) depth, (b) surface orientation, (c) reflectance, or (d) illumination.

Exercise 5 A stereoscopic system is being contemplated for terrain mapping. It will consist of two CCD cameras, each having $512\times 512$ pixels on a 10 cm $\times$ 10 cm square sensor. The lenses to be used have a focal length of 16 cm, with the focus fixed at infinity. For corresponding points $(u_1,v_1)$ in the left image and $(u_2,v_2)$ in the right image, $v_1=v_2$ because the $x$-axes in the two image planes are parallel to the epipolar lines—the lines from the object to the camera. The optical axes of the two cameras are parallel. The baseline between the cameras is 1 meter.

1. If the nearest distance to be measured is 16 meters, what is the largest disparity that will occur (in pixels)?
2.
What is the distance resolution at 16 meters, due to the pixel spacing?
3. What distance corresponds to a disparity of one pixel?

Exercise 6 Which of the following are true, and which are false?

1. Finding corresponding points in stereo images is the easiest phase of the stereo depth-finding process.
2. Shape-from-texture can be done by projecting a grid of light-stripes onto the scene.
3. Lines with equal lengths in the scene always project to equal lengths in the image.
4. Straight lines in the image necessarily correspond to straight lines in the scene.

Exercise 7 Which of the following are true, and which are false?

1. Finding corresponding points in stereo images is the easiest phase of the stereo depth-finding process.
2. In stereo views of the same scene, greater accuracy is obtained in the depth calculations if the two camera positions are farther apart.
3. Lines with equal lengths in the scene always project to equal lengths in the image.
4. Straight lines in the image necessarily correspond to straight lines in the scene.

Top view of a two-camera vision system observing a bottle with a wall behind it.

Exercise 8 (Courtesy of Pietro Perona.) Figure bottle-figure shows two cameras at X and Y observing a scene. Draw the image seen at each camera, assuming that all named points are in the same horizontal plane. What can be concluded from these two images about the relative distances of points A, B, C, D, and E from the camera baseline, and on what basis?

Exercise 1 (mcl-biasdness-exercise) Monte Carlo localization is biased for any finite sample size—i.e., the expected value of the location computed by the algorithm differs from the true expected value—because of the way particle filtering works. In this question, you are asked to quantify this bias.

To simplify, consider a world with four possible robot locations: $X=\{x_1,x_2,x_3,x_4\}$. Initially, we draw $N$ samples uniformly from among those locations.
As usual, it is perfectly acceptable if more than one sample is generated for any of the locations in $X$. Let $Z$ be a Boolean sensor variable characterized by conditional probabilities such as
$$P(z\mid x_1) = 0.8 \qquad P(\lnot z\mid x_1) = 0.2$$
with corresponding (unequal) values for $x_2$, $x_3$, and $x_4$.

MCL uses these probabilities to generate particle weights, which are subsequently normalized and used in the resampling process. For simplicity, let us assume we generate only one new sample in the resampling process, regardless of $N$. This sample might correspond to any of the four locations in $X$. Thus, the sampling process defines a probability distribution over $X$.

1. What is the resulting probability distribution over $X$ for this new sample? Answer this question separately for finite values of $N$, and for $N=\infty$.
2. The difference between two probability distributions $P$ and $Q$ can be measured by the KL divergence, which is defined as
   $${KL}(P,Q) = \sum_i P(x_i)\log\frac{P(x_i)}{Q(x_i)}.$$
   What are the KL divergences between the distributions in (a) and the true posterior?
3. What modification of the problem formulation (not the algorithm!) would guarantee that the specific estimator above is unbiased even for finite values of $N$? Provide at least two such modifications (each of which should be sufficient).

Exercise 2 (mcl-implement-exercise) Implement Monte Carlo localization for a simulated robot with range sensors. A grid map and range data are available from the code repository at aima.cs.berkeley.edu. You should demonstrate successful global localization of the robot.
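The weight-and-resample step at the core of both MCL exercises can be sketched as follows. The sensor table is passed in as a dict of placeholder values; the single-sample variant analyzed in Exercise 1 corresponds to resampling with `k=1`:

```python
import random

def mcl_update(particles, z, p_z_given_x):
    """One MCL sensor update: weight each particle by P(z|x) or P(not z|x),
    normalize the weights, and resample the particle set."""
    weights = [p_z_given_x[x] if z else 1.0 - p_z_given_x[x] for x in particles]
    total = sum(weights)
    probs = [w / total for w in weights]           # normalized weights
    return random.choices(particles, weights=probs, k=len(particles))
```

Because the resampled set is drawn from the *empirical* (finite-sample) weighted distribution rather than the true posterior, repeating this experiment and averaging exhibits exactly the finite-$N$ bias that Exercise 1 asks you to quantify.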
A robot manipulator in two of its possible configurations.

Exercise 3 (AB-manipulator-ex) Consider a robot with two simple manipulators, as shown in figure figRobot2. Manipulator A is a square block of side 2 which can slide back and forth on a rod that runs along the x-axis from $x=-10$ to $x=10$. Manipulator B is a square block of side 2 which can slide back and forth on a rod that runs along the y-axis from $y=-10$ to $y=10$. The rods lie outside the plane of manipulation, so the rods do not interfere with the movement of the blocks. A configuration is then a pair ${\langle}x,y{\rangle}$ where $x$ is the x-coordinate of the center of manipulator A and where $y$ is the y-coordinate of the center of manipulator B. Draw the configuration space for this robot, indicating the permitted and excluded zones.

Exercise 4 Suppose that you are working with the robot in Exercise AB-manipulator-ex and you are given the problem of finding a path from the starting configuration of figure figRobot2 to the ending configuration. Consider a potential function $$D(A, {Goal})^2 + D(B, {Goal})^2 + \frac{1}{D(A, B)^2}$$ where $D(A,B)$ is the distance between the closest points of A and B.

1. Show that hill climbing in this potential field will get stuck in a local minimum.
2. Describe a potential field where hill climbing will solve this particular problem. You need not work out the exact numerical coefficients needed, just the general form of the solution. (Hint: Add a term that “rewards” the hill climber for moving A out of B’s way, even in a case like this where this does not reduce the distance from A to B in the above sense.)

Exercise 5 (inverse-kinematics-exercise) Consider the robot arm shown in Figure FigArm1. Assume that the robot’s base element is 60cm long and that its upper arm and forearm are each 40cm long. As argued on page inverse-kinematics-not-unique, the inverse kinematics of a robot is often not unique. State an explicit closed-form solution of the inverse kinematics for this arm.
Under what exact conditions is the solution unique?

Exercise 6 (inverse-kinematics-exercise) Consider the robot arm shown in Figure FigArm1. Assume that the robot’s base element is 70cm long and that its upper arm and forearm are each 50cm long. As argued on page inverse-kinematics-not-unique, the inverse kinematics of a robot is often not unique. State an explicit closed-form solution of the inverse kinematics for this arm. Under what exact conditions is the solution unique?

Exercise 7 (voronoi-exercise) Implement an algorithm for calculating the Voronoi diagram of an arbitrary 2D environment, described by an $n\times n$ Boolean array. Illustrate your algorithm by plotting the Voronoi diagram for 10 interesting maps. What is the complexity of your algorithm?

Exercise 8 (confspace-exercise) This exercise explores the relationship between workspace and configuration space using the examples shown in Figure FigEx2.

1. Consider the robot configurations shown in Figure FigEx2(a) through (c), ignoring the obstacle shown in each of the diagrams. Draw the corresponding arm configurations in configuration space. (Hint: Each arm configuration maps to a single point in configuration space, as illustrated in Figure FigArm1(b).)
2. Draw the configuration space for each of the workspace diagrams in Figure FigEx2(a)–(c). (Hint: The configuration spaces share with the one shown in Figure FigEx2(a) the region that corresponds to self-collision, but differences arise from the lack of enclosing obstacles and the different locations of the obstacles in these individual figures.)
3. For each of the black dots in Figure FigEx2(e)–(f), draw the corresponding configurations of the robot arm in workspace. Please ignore the shaded regions in this exercise.
4. The configuration spaces shown in Figure FigEx2(e)–(f) have all been generated by a single workspace obstacle (dark shading), plus the constraints arising from the self-collision constraint (light shading).
Draw, for each diagram, the workspace obstacle that corresponds to the darkly shaded area.
5. Figure FigEx2(d) illustrates that a single planar obstacle can decompose the workspace into two disconnected regions. What is the maximum number of disconnected regions that can be created by inserting a planar obstacle into an obstacle-free, connected workspace, for a 2DOF robot? Give an example, and argue why no larger number of disconnected regions can be created. How about a non-planar obstacle?

Exercise 9 Consider a mobile robot moving on a horizontal surface. Suppose that the robot can execute two kinds of motions:

- Rolling forward a specified distance.
- Rotating in place through a specified angle.

The state of such a robot can be characterized in terms of three parameters ${\langle}x,y,\phi{\rangle}$, the x-coordinate and y-coordinate of the robot (more precisely, of its center of rotation) and the robot’s orientation expressed as the angle from the positive x direction. The action “$Roll(D)$” has the effect of changing state ${\langle}x,y,\phi{\rangle}$ to ${\langle}x+D\cos(\phi), y+D\sin(\phi), \phi{\rangle}$, and the action $Rotate(\theta)$ has the effect of changing state ${\langle}x,y,\phi{\rangle}$ to ${\langle}x,y,\phi+\theta{\rangle}$.

1. Suppose that the robot is initially at ${\langle}0,0,0{\rangle}$ and then executes the actions $Rotate(60^{\circ})$, $Roll(1)$, $Rotate(25^{\circ})$, $Roll(2)$. What is the final state of the robot?
2. Now suppose that the robot has imperfect control of its own rotation, and that, if it attempts to rotate by $\theta$, it may actually rotate by any angle between $\theta-10^{\circ}$ and $\theta+10^{\circ}$. In that case, if the robot attempts to carry out the sequence of actions in (A), there is a range of possible ending states. What are the minimal and maximal values of the x-coordinate, the y-coordinate, and the orientation in the final state?
3. Let us modify the model in (B) to a probabilistic model in which, when the robot attempts to rotate by $\theta$, its actual angle of rotation follows a Gaussian distribution with mean $\theta$ and standard deviation $10^{\circ}$. Suppose that the robot executes the actions $Rotate(90^{\circ})$, $Roll(1)$. Give a simple argument that (a) the expected value of the location at the end is not equal to the result of rotating exactly $90^{\circ}$ and then rolling forward 1 unit, and (b) that the distribution of locations at the end does not follow a Gaussian. (Do not attempt to calculate the true mean or the true distribution.) The point of this exercise is that rotational uncertainty quickly gives rise to a lot of positional uncertainty and that dealing with rotational uncertainty is painful, whether uncertainty is treated in terms of hard intervals or probabilistically, due to the fact that the relation between orientation and position is both non-linear and non-monotonic.

Simplified robot in a maze. See Exercise robot-exploration-exercise.

Exercise 10 (robot-exploration-exercise) Consider the simplified robot shown in Figure FigEx3. Suppose the robot’s Cartesian coordinates are known at all times, as are those of its goal location. However, the locations of the obstacles are unknown. The robot can sense obstacles in its immediate proximity, as illustrated in this figure. For simplicity, let us assume the robot’s motion is noise-free, and the state space is discrete. Figure FigEx3 is only one example; in this exercise you are required to address all possible grid worlds with a valid path from the start to the goal location.

1. Design a deliberate controller that guarantees that the robot always reaches its goal location if at all possible. The deliberate controller can memorize measurements in the form of a map that is being acquired as the robot moves. Between individual moves, it may spend arbitrary time deliberating.
2. Now design a reactive controller for the same task.
This controller may not memorize past sensor measurements. (It may not build a map!) Instead, it has to make all decisions based on the current measurement, which includes knowledge of its own location and that of the goal. The time to make a decision must be independent of the environment size or the number of past time steps. What is the maximum number of steps that it may take for your robot to arrive at the goal?
3. How will your controllers from (a) and (b) perform if any of the following six conditions apply: continuous state space, noise in perception, noise in motion, noise in both perception and motion, unknown location of the goal (the goal can be detected only when within sensor range), or moving obstacles. For each condition and each controller, give an example of a situation where the robot fails (or explain why it cannot fail).

Exercise 11 (subsumption-exercise) In Figure Fig5(b) on page Fig5, we encountered an augmented finite state machine for the control of a single leg of a hexapod robot. In this exercise, the aim is to design an AFSM that, when combined with six copies of the individual leg controllers, results in efficient, stable locomotion. For this purpose, you have to augment the individual leg controller to pass messages to your new AFSM and to wait until other messages arrive. Argue why your controller is efficient, in that it does not unnecessarily waste energy (e.g., by sliding legs), and in that it propels the robot at reasonably high speeds. Prove that your controller satisfies the dynamic stability condition given on page polygon-stability-condition-page.

Exercise 12 (human-robot-exercise) (This exercise was first devised by Michael Genesereth and Nils Nilsson. It works for first graders through graduate students.) Humans are so adept at basic household tasks that they often forget how complex these tasks are. In this exercise you will discover the complexity and recapitulate the last 30 years of developments in robotics.
Consider the task of building an arch out of three blocks. Simulate a robot with four humans as follows:

Brain. The Brain directs the hands in the execution of a plan to achieve the goal. The Brain receives input from the Eyes, but cannot see the scene directly. The Brain is the only one who knows what the goal is.

Eyes. The Eyes report a brief description of the scene to the Brain: “There is a red box standing on top of a green box, which is on its side.” Eyes can also answer questions from the Brain such as, “Is there a gap between the Left Hand and the red box?” If you have a video camera, point it at the scene and allow the eyes to look at the viewfinder of the video camera, but not directly at the scene.

Left hand and right hand. One person plays each Hand. The two Hands stand next to each other, each wearing an oven mitt on one hand. Hands execute only simple commands from the Brain—for example, “Left Hand, move two inches forward.” They cannot execute commands other than motions; for example, they cannot be commanded to “Pick up the box.” The Hands must be blindfolded. The only sensory capability they have is the ability to tell when their path is blocked by an immovable obstacle such as a table or the other Hand. In such cases, they can beep to inform the Brain of the difficulty.

Exercise 1 Go through Turing’s list of alleged “disabilities” of machines, identifying which have been achieved, which are achievable in principle by a program, and which are still problematic because they require conscious mental states.

Exercise 2 Find and analyze an account in the popular media of one or more of the arguments to the effect that AI is impossible.

Exercise 3 Attempt to write definitions of the terms “intelligence,” “thinking,” and “consciousness.” Suggest some possible objections to your definitions.

Exercise 4 Does a refutation of the Chinese room argument necessarily prove that appropriately programmed computers have mental states?
Does an acceptance of the argument necessarily mean that computers cannot have mental states?

Exercise 5 (brain-prosthesis-exercise) In the brain replacement argument, it is important to be able to restore the subject’s brain to normal, such that its external behavior is as it would have been if the operation had not taken place. Can the skeptic reasonably object that this would require updating those neurophysiological properties of the neurons relating to conscious experience, as distinct from those involved in the functional behavior of the neurons?

Exercise 6 Suppose that a Prolog program containing many clauses about the rules of British citizenship is compiled and run on an ordinary computer. Analyze the “brain states” of the computer under wide and narrow content.

Exercise 7 Alan Perlis [Perlis:1982] wrote, “A year spent in artificial intelligence is enough to make one believe in God.” He also wrote, in a letter to Philip Davis, that one of the central dreams of computer science is that “through the performance of computers and their programs we will remove all doubt that there is only a chemical distinction between the living and nonliving world.” To what extent does the progress made so far in artificial intelligence shed light on these issues? Suppose that at some future date, the AI endeavor has been completely successful; that is, we have built intelligent agents capable of carrying out any human cognitive task at human levels of ability. To what extent would that shed light on these issues?

Exercise 8 Compare the social impact of artificial intelligence in the last fifty years with the social impact of the introduction of electric appliances and the internal combustion engine in the fifty years between 1890 and 1940.

Exercise 9 I. J. Good claims that intelligence is the most important quality, and that building ultraintelligent machines will change everything.
A sentient cheetah counters that “Actually speed is more important; if we could build ultrafast machines, that would change everything,” and a sentient elephant claims “You’re both wrong; what we need is ultrastrong machines.” What do you think of these arguments?

Exercise 10 Analyze the potential threats from AI technology to society. What threats are most serious, and how might they be combated? How do they compare to the potential benefits?

Exercise 11 How do the potential threats from AI technology compare with those from other computer science technologies, and to bio-, nano-, and nuclear technologies?

Exercise 12 Some critics object that AI is impossible, while others object that it is *too* possible and that ultraintelligent machines pose a threat. Which of these objections do you think is more likely? Would it be a contradiction for someone to hold both positions?

Exercise 1 Define in your own words: (a) intelligence, (b) artificial intelligence, (c) agent, (d) rationality, (e) logical reasoning.

Exercise 2 Read Turing’s original paper on AI Turing:1950. In the paper, he discusses several objections to his proposed enterprise and his test for intelligence. Which objections still carry weight? Are his refutations valid? Can you think of new objections arising from developments since he wrote the paper? In the paper, he predicts that, by the year 2000, a computer will have a 30% chance of passing a five-minute Turing Test with an unskilled interrogator. What chance do you think a computer would have today? In another 50 years?

Exercise 3 Every year the Loebner Prize is awarded to the program that comes closest to passing a version of the Turing Test. Research and report on the latest winner of the Loebner Prize. What techniques does it use? How does it advance the state of the art in AI?

Exercise 4 Are reflex actions (such as flinching from a hot stove) rational? Are they intelligent?
Exercise 5 There are well-known classes of problems that are intractably difficult for computers, and other classes that are provably undecidable. Does this mean that AI is impossible?

Exercise 6 Suppose we extend Evans’s ANALOGY program so that it can score 200 on a standard IQ test. Would we then have a program more intelligent than a human? Explain.

Exercise 7 The neural structure of the sea slug Aplysia has been widely studied (first by Nobel Laureate Eric Kandel) because it has only about 20,000 neurons, most of them large and easily manipulated. Assuming that the cycle time for an Aplysia neuron is roughly the same as for a human neuron, how does the computational power, in terms of memory updates per second, compare with the high-end computer described in (Figure computer-brain-table)?

Exercise 8 How could introspection—reporting on one’s inner thoughts—be inaccurate? Could I be wrong about what I’m thinking? Discuss.

Exercise 9 To what extent are the following computer systems instances of artificial intelligence:

- Supermarket bar code scanners.
- Web search engines.
- Voice-activated telephone menus.
- Internet routing algorithms that respond dynamically to the state of the network.

Exercise 10 To what extent are the following computer systems instances of artificial intelligence:

- Supermarket bar code scanners.
- Voice-activated telephone menus.
- Spelling and grammar correction features in Microsoft Word.
- Internet routing algorithms that respond dynamically to the state of the network.

Exercise 11 Many of the computational models of cognitive activities that have been proposed involve quite complex mathematical operations, such as convolving an image with a Gaussian or finding a minimum of the entropy function. Most humans (and certainly all animals) never learn this kind of mathematics at all, almost no one learns it before college, and almost no one can compute the convolution of a function with a Gaussian in their head.
What sense does it make to say that the “vision system” is doing this kind of mathematics, whereas the actual person has no idea how to do it?

Exercise 12 Some authors have claimed that perception and motor skills are the most important part of intelligence, and that “higher level” capacities are necessarily parasitic—simple add-ons to these underlying facilities. Certainly, most of evolution and a large part of the brain have been devoted to perception and motor skills, whereas AI has found tasks such as game playing and logical inference to be easier, in many ways, than perceiving and acting in the real world. Do you think that AI’s traditional focus on higher-level cognitive abilities is misplaced?

Exercise 13 Why would evolution tend to result in systems that act rationally? What goals are such systems designed to achieve?

Exercise 14 Is AI a science, or is it engineering? Or neither or both? Explain.

Exercise 15 “Surely computers cannot be intelligent—they can do only what their programmers tell them.” Is the latter statement true, and does it imply the former?

Exercise 16 “Surely animals cannot be intelligent—they can do only what their genes tell them.” Is the latter statement true, and does it imply the former?

Exercise 17 “Surely animals, humans, and computers cannot be intelligent—they can do only what their constituent atoms are told to do by the laws of physics.” Is the latter statement true, and does it imply the former?
Exercise 18 Examine the AI literature to discover whether the following tasks can currently be solved by computers:

- Playing a decent game of table tennis (Ping-Pong).
- Driving in the center of Cairo, Egypt.
- Driving in Victorville, California.
- Buying a week’s worth of groceries at the market.
- Buying a week’s worth of groceries on the Web.
- Playing a decent game of bridge at a competitive level.
- Discovering and proving new mathematical theorems.
- Writing an intentionally funny story.
- Giving competent legal advice in a specialized area of law.
- Translating spoken English into spoken Swedish in real time.
- Performing a complex surgical operation.

Exercise 19 For the currently infeasible tasks, try to find out what the difficulties are and predict when, if ever, they will be overcome.

Exercise 20 Various subfields of AI have held contests by defining a standard task and inviting researchers to do their best. Examples include the DARPA Grand Challenge for robotic cars, the International Planning Competition, the Robocup robotic soccer league, the TREC information retrieval event, and contests in machine translation and speech recognition. Investigate five of these contests and describe the progress made over the years. To what degree have the contests advanced the state of the art in AI? To what degree do they hurt the field by drawing energy away from new ideas?

Exercise 21 Suppose that the performance measure is concerned with just the first $T$ time steps of the environment and ignores everything thereafter. Show that a rational agent’s action may depend not just on the state of the environment but also on the time step it has reached.

Exercise 22 (vacuum-rationality-exercise) Let us examine the rationality of various vacuum-cleaner agent functions.

1. Show that the simple vacuum-cleaner agent function described in Figure vacuum-agent-function-table is indeed rational under the assumptions listed on page vacuum-rationality-page.
2. Describe a rational agent function for the case in which each movement costs one point. Does the corresponding agent program require internal state?
3. Discuss possible agent designs for the cases in which clean squares can become dirty and the geography of the environment is unknown. Does it make sense for the agent to learn from its experience in these cases? If so, what should it learn? If not, why not?

Exercise 23 Write an essay on the relationship between evolution and one or more of autonomy, intelligence, and learning.

Exercise 24 For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate.

1. An agent that senses only partial information about the state cannot be perfectly rational.
2. There exist task environments in which no pure reflex agent can behave rationally.
3. There exists a task environment in which every agent is rational.
4. The input to an agent program is the same as the input to the agent function.
5. Every agent function is implementable by some program/machine combination.
6. Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.
7. It is possible for a given agent to be perfectly rational in two distinct task environments.
8. Every agent is rational in an unobservable environment.
9. A perfectly rational poker-playing agent never loses.

Exercise 25 (PEAS-exercise) For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in Section env-properties-subsection.

- Playing soccer.
- Exploring the subsurface oceans of Titan.
- Shopping for used AI books on the Internet.
- Playing a tennis match.
- Practicing tennis against a wall.
- Performing a high jump.
- Knitting a sweater.
- Bidding on an item at an auction.
Exercise 26 For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in Section env-properties-subsection.

- Performing a gymnastics floor routine.
- Exploring the subsurface oceans of Titan.
- Playing soccer.
- Shopping for used AI books on the Internet.
- Practicing tennis against a wall.
- Performing a high jump.
- Bidding on an item at an auction.

Exercise 27 (agent-fn-prog-exercise) Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

Exercise 28 This exercise explores the differences between agent functions and agent programs.

1. Can there be more than one agent program that implements a given agent function? Give an example, or show why one is not possible.
2. Are there agent functions that cannot be implemented by any agent program?
3. Given a fixed machine architecture, does each agent program implement exactly one agent function?
4. Given an architecture with $n$ bits of storage, how many different possible agent programs are there?
5. Suppose we keep the agent program fixed but speed up the machine by a factor of two. Does that change the agent function?

Exercise 29 Write pseudocode agent programs for the goal-based and utility-based agents.

The following exercises all concern the implementation of environments and agents for the vacuum-cleaner world.

Exercise 30 (vacuum-start-exercise) Consider a simple thermostat that turns on a furnace when the temperature is at least 3 degrees below the setting, and turns off a furnace when the temperature is at least 3 degrees above the setting. Is a thermostat an instance of a simple reflex agent, a model-based reflex agent, or a goal-based agent?
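The thermostat's condition–action structure can be made concrete. Below is a minimal sketch in Python (the function name and action strings are our own invention), writing the rule as an agent whose choice depends only on the current percept:

```python
def thermostat_agent(temperature, setting):
    """Reflex rule acting on the current percept only.

    Turns the furnace ON at least 3 degrees below the setting,
    OFF at least 3 degrees above it, and otherwise does nothing.
    """
    if temperature <= setting - 3:
        return "on"
    if temperature >= setting + 3:
        return "off"
    return "no-op"  # inside the dead band: no action

# With a setting of 20 degrees:
# thermostat_agent(16, 20) -> "on"
# thermostat_agent(24, 20) -> "off"
# thermostat_agent(20, 20) -> "no-op"
```

Note that the rule consults no internal state or percept history, which is the crux of the classification question.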
Exercise 31 Implement a performance-measuring environment simulator for the vacuum-cleaner world depicted in Figure vacuum-world-figure and specified on page vacuum-rationality-page. Your implementation should be modular so that the sensors, actuators, and environment characteristics (size, shape, dirt placement, etc.) can be changed easily. (Note: for some choices of programming language and operating system there are already implementations in the online code repository.)

Exercise 32 (vacuum-motion-penalty-exercise) Implement a simple reflex agent for the vacuum environment in Exercise vacuum-start-exercise. Run the environment with this agent for all possible initial dirt configurations and agent locations. Record the performance score for each configuration and the overall average score.

Exercise 33 (vacuum-unknown-geog-exercise) Consider a modified version of the vacuum environment in Exercise vacuum-start-exercise, in which the agent is penalized one point for each movement.

1. Can a simple reflex agent be perfectly rational for this environment? Explain.
2. What about a reflex agent with state? Design such an agent.
3. How do your answers to 1 and 2 change if the agent’s percepts give it the clean/dirty status of every square in the environment?

Exercise 34 (vacuum-bump-exercise) Consider a modified version of the vacuum environment in Exercise vacuum-start-exercise, in which the geography of the environment—its extent, boundaries, and obstacles—is unknown, as is the initial dirt configuration. (The agent can go Up and Down as well as Left and Right.)

1. Can a simple reflex agent be perfectly rational for this environment? Explain.
2. Can a simple reflex agent with a randomized agent function outperform a simple reflex agent? Design such an agent and measure its performance on several environments.
3. Can you design an environment in which your randomized agent will perform poorly? Show your results.
4. Can a reflex agent with state outperform a simple reflex agent?
Design such an agent and measure its performance on several environments. Can you design a rational agent of this type?

Exercise 35 (vacuum-finish-exercise) Repeat Exercise vacuum-unknown-geog-exercise for the case in which the location sensor is replaced with a “bump” sensor that detects the agent’s attempts to move into an obstacle or to cross the boundaries of the environment. Suppose the bump sensor stops working; how should the agent behave?

Exercise 36 Explain why problem formulation must follow goal formulation.

Exercise 37 Give a complete problem formulation for each of the following problems. Choose a formulation that is precise enough to be implemented.

1. There are six glass boxes in a row, each with a lock. Each of the first five boxes holds a key unlocking the next box in line; the last box holds a banana. You have the key to the first box, and you want the banana.
2. You start with the sequence ABABAECCEC, or in general any sequence made from A, B, C, and E. You can transform this sequence using the following equalities: AC = E, AB = BC, BB = E, and E$x$ = $x$ for any $x$. For example, ABBC can be transformed into AEC, and then AC, and then E. Your goal is to produce the sequence E.
3. There is an $n \times n$ grid of squares, each square initially being either unpainted floor or a bottomless pit. You start standing on an unpainted floor square, and can either paint the square under you or move onto an adjacent unpainted floor square. You want the whole floor painted.
4. A container ship is in port, loaded high with containers. There are 13 rows of containers, each 13 containers wide and 5 containers tall. You control a crane that can move to any location above the ship, pick up the container under it, and move it onto the dock. You want the ship unloaded.

Exercise 38 Your goal is to navigate a robot out of a maze. The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop before hitting a wall.

1. Formulate this problem. How large is the state space?
2. In navigating a maze, the only place we need to turn is at the intersection of two or more corridors. Reformulate this problem using this observation. How large is the state space now?
3. From each point in the maze, we can move in any of the four directions until we reach a turning point, and this is the only action we need to do. Reformulate the problem using these actions. Do we need to keep track of the robot’s orientation now?
4. In our initial description of the problem we already abstracted from the real world, restricting actions and removing details. List three such simplifications we made.

Exercise 39 You have a $9 \times 9$ grid of squares, each of which can be colored red or blue. The grid is initially colored all blue, but you can change the color of any square any number of times. Imagining the grid divided into nine $3 \times 3$ sub-squares, you want each sub-square to be all one color but neighboring sub-squares to be different colors.

1. Formulate this problem in the straightforward way. Compute the size of the state space.
2. You need to color a square only once. Reformulate, and compute the size of the state space. Would breadth-first graph search perform faster on this problem than on the one in (a)? How about iterative deepening tree search?
3. Given the goal, we need consider only colorings where each sub-square is uniformly colored. Reformulate the problem and compute the size of the state space.
4. How many solutions does this problem have?
5. Parts (b) and (c) successively abstracted the original problem (a). Can you give a translation from solutions in problem (c) into solutions in problem (b), and from solutions in problem (b) into solutions for problem (a)?

Exercise 40 (two-friends-exercise) Suppose two friends live in different cities on a map, such as the Romania map shown in .
On every turn, we cansimultaneously move each friend to a neighboring city on the map. Theamount of time needed to move from city $i$ to neighbor $j$ is equal tothe road distance $d(i,j)$ between the cities, but on each turn thefriend that arrives first must wait until the other one arrives (andcalls the first on his/her cell phone) before the next turn can begin.We want the two friends to meet as quickly as possible.1. Write a detailed formulation for this search problem. (You will find it helpful to define some formal notation here.)2. Let $D(i,j)$ be the straight-line distance between cities $i$ and $j$. Which of the following heuristic functions are admissible? (i) $D(i,j)$; (ii) $2cdot D(i,j)$; (iii) $D(i,j)/2$. 3. Are there completely connected maps for which no solution exists? 4. Are there maps in which all solutions require one friend to visit the same city twice? Exercise 41 (8puzzle-parity-exercise) Show that the 8-puzzle states are dividedinto two disjoint sets, such that any state is reachable from any otherstate in the same set, while no state is reachable from any state in theother set. (Hint: See Berlekamp+al:1982) Devise a procedure to decidewhich set a given state is in, and explain why this is useful forgenerating random states. Exercise 42 (nqueens-size-exercise) Consider the $n$-queens problem using the“efficient” incremental formulation given on page nqueens-page. Explain why the statespace has at least $sqrt[3]{n!}$ states and estimate the largest $n$for which exhaustive exploration is feasible. (Hint:Derive a lower bound on the branching factor by considering the maximumnumber of squares that a queen can attack in any column.) Exercise 43 Give a complete problem formulation for each of the following. Choose aformulation that is precise enough to be implemented.1. Using only four colors, you have to color a planar map in such a way that no two adjacent regions have the same color.2. 
A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas. The room contains two stackable, movable, climbable 3-foot-high crates.
3. You have a program that outputs the message “illegal input record” when fed a certain file of input records. You know that processing of each record is independent of the other records. You want to discover which record is illegal.
4. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet. You can fill the jugs up or empty them out from one to another or onto the ground. You need to measure out exactly one gallon.

Exercise 44 (path-planning-exercise)

Consider the problem of finding the shortest path between two points on a plane that has convex polygonal obstacles. This is an idealization of the problem that a robot has to solve to navigate in a crowded environment.
1. Suppose the state space consists of all positions $(x,y)$ in the plane. How many states are there? How many paths are there to the goal?
2. Explain briefly why the shortest path from one polygon vertex to any other in the scene must consist of straight-line segments joining some of the vertices of the polygons. Define a good state space now. How large is this state space?
3. Define the necessary functions to implement the search problem, including an actions function that takes a vertex as input and returns a set of vectors, each of which maps the current vertex to one of the vertices that can be reached in a straight line. (Do not forget the neighbors on the same polygon.) Use the straight-line distance for the heuristic function.
4. Apply one or more of the algorithms in this chapter to solve a range of problems in the domain, and comment on their performance.

Exercise 45 (negative-g-exercise)

On page non-negative-g, we said that we would not consider problems with negative path costs. In this exercise, we explore this decision in more depth.
1.
Suppose that actions can have arbitrarily large negative costs; explain why this possibility would force any optimal algorithm to explore the entire state space.
2. Does it help if we insist that step costs must be greater than or equal to some negative constant $c$? Consider both trees and graphs.
3. Suppose that a set of actions forms a loop in the state space such that executing the set in some order results in no net change to the state. If all of these actions have negative cost, what does this imply about the optimal behavior for an agent in such an environment?
4. One can easily imagine actions with high negative cost, even in domains such as route finding. For example, some stretches of road might have such beautiful scenery as to far outweigh the normal costs in terms of time and fuel. Explain, in precise terms, within the context of state-space search, why humans do not drive around scenic loops indefinitely, and explain how to define the state space and actions for route finding so that artificial agents can also avoid looping.
5. Can you think of a real domain in which step costs are such as to cause looping?

Exercise 46 (mc-problem)

The missionaries-and-cannibals problem is usually stated as follows. Three missionaries and three cannibals are on one side of a river, along with a boat that can hold one or two people. Find a way to get everyone to the other side without ever leaving a group of missionaries in one place outnumbered by the cannibals in that place. This problem is famous in AI because it was the subject of the first paper that approached problem formulation from an analytical viewpoint (Amarel:1968).
1. Formulate the problem precisely, making only those distinctions necessary to ensure a valid solution. Draw a diagram of the complete state space.
2. Implement and solve the problem optimally using an appropriate search algorithm. Is it a good idea to check for repeated states?
3. Why do you think people have a hard time solving this puzzle, given that the state space is so simple?
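Part 2 of Exercise 46 asks for an implementation. One possible sketch (the state encoding and function names are ours): represent a state as (missionaries, cannibals, boat) on the left bank, and run breadth-first search, which returns a shortest plan.

```python
from collections import deque

def successors(state):
    m, c, boat = state          # missionaries, cannibals on the LEFT bank; boat=1 means boat on left
    sign = -1 if boat else 1    # the boat carries people away from its current bank
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        m2, c2 = m + sign * dm, c + sign * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3:
            # missionaries may never be outnumbered on either bank
            if (m2 == 0 or m2 >= c2) and (m2 == 3 or 3 - m2 >= 3 - c2):
                yield (m2, c2, 1 - boat)

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:           # reconstruct the plan by following parents
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for s2 in successors(s):
            if s2 not in parent:   # checking for repeated states keeps the search tiny
                parent[s2] = s
                frontier.append(s2)

plan = bfs()
print(len(plan) - 1, "crossings")   # optimal plan: 11 crossings
```

Because the `parent` table doubles as a visited set, repeated states are pruned, which is exactly the design question raised in part 2.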
Exercise 47

Define in your own words the following terms: state, state space, search tree, search node, goal, action, transition model, and branching factor.

Exercise 48

What’s the difference between a world state, a state description, and a search node? Why is this distinction useful?

Exercise 49

An action such as driving from one city to the next really consists of a long sequence of finer-grained actions: turn on the car, release the brake, accelerate forward, etc. Having composite actions of this kind reduces the number of steps in a solution sequence, thereby reducing the search time. Suppose we take this to the logical extreme, by making super-composite actions out of every possible sequence of actions. Then every problem instance is solved by a single super-composite action. Explain how search would work in this formulation. Is this a practical approach for speeding up problem solving?

Exercise 50

Does a finite state space always lead to a finite search tree? How about a finite state space that is a tree? Can you be more precise about what types of state spaces always lead to finite search trees? (Adapted from, 1996.)

Exercise 51 (graph-separation-property-exercise)

Prove that graph search satisfies the graph separation property. (Hint: Begin by showing that the property holds at the start, then show that if it holds before an iteration of the algorithm, it holds afterwards.) Describe a search algorithm that violates the property.

Exercise 52

Which of the following are true and which are false? Explain your answers.
1. Depth-first search always expands at least as many nodes as A* search with an admissible heuristic.
2. $h(n)=0$ is an admissible heuristic for the 8-puzzle.
3. A* is of no use in robotics because percepts, states, and actions are continuous.
4. Breadth-first search is complete even if zero step costs are allowed.
5. Assume that a rook can move on a chessboard any number of squares in a straight line, vertically or horizontally, but cannot jump over other pieces.
Manhattan distance is an admissible heuristic for the problem of moving the rook from square A to square B in the smallest number of moves.

Exercise 53

Consider a state space where the start state is number 1 and each state $k$ has two successors: numbers $2k$ and $2k+1$.
1. Draw the portion of the state space for states 1 to 15.
2. Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.
3. How well would bidirectional search work on this problem? What is the branching factor in each direction of the bidirectional search?
4. Does the answer to (c) suggest a reformulation of the problem that would allow you to solve the problem of getting from state 1 to a given goal state with almost no search?
5. Call the action going from $k$ to $2k$ Left, and the action going to $2k+1$ Right. Can you find an algorithm that outputs the solution to this problem without any search at all?

Exercise 54 (brio-exercise)

A basic wooden railway set contains the pieces shown in the figure. The task is to connect these pieces into a railway that has no overlapping tracks and no loose ends where a train could run off onto the floor.
1. Suppose that the pieces fit together exactly with no slack. Give a precise formulation of the task as a search problem.
2. Identify a suitable uninformed search algorithm for this task and explain your choice.
3. Explain why removing any one of the “fork” pieces makes the problem unsolvable.
4. Give an upper bound on the total size of the state space defined by your formulation. (Hint: think about the maximum branching factor for the construction process and the maximum depth, ignoring the problem of overlapping pieces and loose ends. Begin by pretending that every piece is unique.)
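Part 5 of Exercise 53 has a closed-form answer worth sketching: going $k \to 2k$ appends a binary 0 and $k \to 2k+1$ appends a 1, so the binary digits of the goal number after the leading 1 spell out the unique path from state 1, with no search at all. A minimal sketch (function name ours):

```python
def route_to(k):
    """Return the Left/Right action sequence from state 1 to state k.

    The binary representation of k, read left to right after the
    leading 1, gives the path: 0 means Left (k -> 2k), 1 means
    Right (k -> 2k+1). No search is needed.
    """
    bits = bin(k)[3:]   # strip the '0b' prefix and the leading 1
    return ["Left" if b == "0" else "Right" for b in bits]

print(route_to(11))   # 11 = 0b1011 -> ['Left', 'Right', 'Right']
```

Checking by hand: 1 → 2 (Left) → 5 (Right) → 11 (Right), matching the digits 011 of 11 after its leading 1.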
Exercise 55

Implement two versions of the successor function for the 8-puzzle: one that copies and edits the data structure for the parent node $s$ and one that modifies the parent state directly (undoing the modifications as needed). Write versions of iterative deepening depth-first search that use these functions and compare their performance.

Exercise 56 (iterative-lengthening-exercise)

On page iterative-lengthening-page, we mentioned iterative lengthening search, an iterative analog of uniform-cost search. The idea is to use increasing limits on path cost. If a node is generated whose path cost exceeds the current limit, it is immediately discarded. For each new iteration, the limit is set to the lowest path cost of any node discarded in the previous iteration.
1. Show that this algorithm is optimal for general path costs.
2. Consider a uniform tree with branching factor $b$, solution depth $d$, and unit step costs. How many iterations will iterative lengthening require?
3. Now consider step costs drawn from the continuous range $[\epsilon,1]$, where $0 < \epsilon < 1$. How many iterations are required in the worst case?
4. Implement the algorithm and apply it to instances of the 8-puzzle and traveling salesperson problems. Compare the algorithm’s performance to that of uniform-cost search, and comment on your results.

Exercise 57

Describe a state space in which iterative deepening search performs much worse than depth-first search (for example, $O(n^{2})$ vs. $O(n)$).

Exercise 58

Write a program that will take as input two Web page URLs and find a path of links from one to the other. What is an appropriate search strategy? Is bidirectional search a good idea? Could a search engine be used to implement a predecessor function?

Exercise 59 (vacuum-search-exercise)

Consider the vacuum-world problem defined earlier.
1. Which of the algorithms defined in this chapter would be appropriate for this problem? Should the algorithm use tree search or graph search?
2.
Apply your chosen algorithm to compute an optimal sequence of actions for a $3\times 3$ world whose initial state has dirt in the three top squares and the agent in the center.
3. Construct a search agent for the vacuum world, and evaluate its performance in a set of $3\times 3$ worlds with probability 0.2 of dirt in each square. Include the search cost as well as path cost in the performance measure, using a reasonable exchange rate.
4. Compare your best search agent with a simple randomized reflex agent that sucks if there is dirt and otherwise moves randomly.
5. Consider what would happen if the world were enlarged to $n \times n$. How does the performance of the search agent and of the reflex agent vary with $n$?

Exercise 60 (search-special-case-exercise)

Prove each of the following statements, or give a counterexample:
1. Breadth-first search is a special case of uniform-cost search.
2. Depth-first search is a special case of best-first tree search.
3. Uniform-cost search is a special case of A* search.

Exercise 61

Compare the performance of A* and RBFS on a set of randomly generated problems in the 8-puzzle (with Manhattan distance) and TSP (with MST) domains. Discuss your results. What happens to the performance of RBFS when a small random number is added to the heuristic values in the 8-puzzle domain?

Exercise 62

Trace the operation of A* search applied to the problem of getting to Bucharest from Lugoj using the straight-line distance heuristic. That is, show the sequence of nodes that the algorithm will consider and the $f$, $g$, and $h$ score for each node.

Exercise 63

Sometimes there is no good evaluation function for a problem but there is a good comparison method: a way to tell whether one node is better than another without assigning numerical values to either. Show that this is enough to do a best-first search. Is there an analog of A* for this setting?
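As a concrete illustration of the setting in Exercise 63, a frontier can be kept ordered using only a pairwise comparison, never a numeric score. The sketch below (the toy graph, names, and comparison are all ours) uses `functools.cmp_to_key` to drive a best-first expansion from a comparison function alone:

```python
from functools import cmp_to_key

def best_first(start, is_goal, successors, cmp):
    """Best-first search driven only by a pairwise comparison cmp(a, b)
    (negative if a should be expanded before b), with no evaluation
    function assigning numbers to nodes."""
    frontier, seen = [start], {start}
    key = cmp_to_key(cmp)
    while frontier:
        frontier.sort(key=key)   # "best" node, as judged by cmp, comes first
        node = frontier.pop(0)
        if is_goal(node):
            return node
        for child in successors(node):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return None

# Toy example: states are strings, and the comparison prefers shorter
# strings -- a preference expressed without ever computing a score.
graph = {"a": ["ab", "ac"], "ab": ["abd"], "ac": ["goal"], "abd": []}
result = best_first("a",
                    lambda s: s == "goal",
                    lambda s: graph.get(s, []),
                    lambda x, y: len(x) - len(y))
print(result)
```

Any total preorder given as a comparison suffices; the open question in the exercise is what replaces $f = g + h$ when costs themselves are only comparable, not summable.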
Exercise 64 (failure-exercise)

Devise a state space in which A* using graph search returns a suboptimal solution with an $h(n)$ function that is admissible but inconsistent.

Exercise 65

Accurate heuristics don’t necessarily reduce search time in the worst case. Given any depth $d$, define a search problem with a goal node at depth $d$, and write a heuristic function such that $|h(n) - h^*(n)| \le O(\log h^*(n))$ but $A^*$ expands all nodes of depth less than $d$.

Exercise 66

The heuristic path algorithm (Pohl:1977) is a best-first search in which the evaluation function is $f(n) = (2-w)g(n) + wh(n)$. For what values of $w$ is this complete? For what values is it optimal, assuming that $h$ is admissible? What kind of search does this perform for $w=0$, $w=1$, and $w=2$?

Exercise 67

Consider the unbounded version of the regular 2D grid. The start state is at the origin, $(0,0)$, and the goal state is at $(x,y)$.
1. What is the branching factor $b$ in this state space?
2. How many distinct states are there at depth $k$ (for $k>0$)?
3. What is the maximum number of nodes expanded by breadth-first tree search?
4. What is the maximum number of nodes expanded by breadth-first graph search?
5. Is $h = |u-x| + |v-y|$ an admissible heuristic for a state at $(u,v)$? Explain.
6. How many nodes are expanded by A* graph search using $h$?
7. Does $h$ remain admissible if some links are removed?
8. Does $h$ remain admissible if some links are added between nonadjacent states?

Exercise 68

$n$ vehicles occupy squares $(1,1)$ through $(n,1)$ (i.e., the bottom row) of an $n\times n$ grid. The vehicles must be moved to the top row but in reverse order; so the vehicle $i$ that starts in $(i,1)$ must end up in $(n-i+1,n)$. On each time step, every one of the $n$ vehicles can move one square up, down, left, or right, or stay put; but if a vehicle stays put, one other adjacent vehicle (but not more than one) can hop over it. Two vehicles cannot occupy the same square.
1.
Calculate the size of the state space as a function of $n$.
2. Calculate the branching factor as a function of $n$.
3. Suppose that vehicle $i$ is at $(x_i,y_i)$; write a nontrivial admissible heuristic $h_i$ for the number of moves it will require to get to its goal location $(n-i+1,n)$, assuming no other vehicles are on the grid.
4. Which of the following heuristics are admissible for the problem of moving all $n$ vehicles to their destinations? Explain.
   1. $\sum_{i=1}^{n} h_i$.
   2. $\max\{h_1,\ldots,h_n\}$.
   3. $\min\{h_1,\ldots,h_n\}$.

Exercise 69

Consider the problem of moving $k$ knights from $k$ starting squares $s_1,\ldots,s_k$ to $k$ goal squares $g_1,\ldots,g_k$, on an unbounded chessboard, subject to the rule that no two knights can land on the same square at the same time. Each action consists of moving up to $k$ knights simultaneously. We would like to complete the maneuver in the smallest number of actions.
1. What is the maximum branching factor in this state space, expressed as a function of $k$?
2. Suppose $h_i$ is an admissible heuristic for the problem of moving knight $i$ to goal $g_i$ by itself. Which of the following heuristics are admissible for the $k$-knight problem? Of those, which is the best?
   1. $\min\{h_1,\ldots,h_k\}$.
   2. $\max\{h_1,\ldots,h_k\}$.
   3. $\sum_{i=1}^{k} h_i$.
3. Repeat (b) for the case where you are allowed to move only one knight at a time.

Exercise 70

We saw on page I-to-F that the straight-line distance heuristic leads greedy best-first search astray on the problem of going from Iasi to Fagaras. However, the heuristic is perfect on the opposite problem: going from Fagaras to Iasi. Are there problems for which the heuristic is misleading in both directions?

Exercise 71

Invent a heuristic function for the 8-puzzle that sometimes overestimates, and show how it can lead to a suboptimal solution on a particular problem. (You can use a computer to help if you want.)
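The effect Exercise 71 asks for is easy to reproduce outside the 8-puzzle. In the sketch below (the graph, costs, and heuristic values are all our own), the optimal route S-A-G costs 2, but because $h$ wildly overestimates at A, A* commits to the direct edge of cost 10:

```python
import heapq

# A tiny made-up graph illustrating the same phenomenon as an
# overestimating 8-puzzle heuristic: A* can return a suboptimal path.
graph = {"S": [("A", 1), ("G", 10)], "A": [("G", 1)], "G": []}
h = {"S": 0, "A": 20, "G": 0}   # h(A)=20 overestimates the true cost h*(A)=1

def astar(start, goal):
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return g, path
        if s in best_g and best_g[s] <= g:
            continue                 # already expanded via a cheaper path
        best_g[s] = g
        for s2, cost in graph[s]:
            heapq.heappush(frontier, (g + cost + h[s2], g + cost, s2, path + [s2]))

print(astar("S", "G"))   # (10, ['S', 'G']) -- yet the optimal path S-A-G costs 2
```

Node A is generated with $f = 1 + 20 = 21$, so the goal (at $f = 10$) is popped first and the cheaper route through A is never explored; with an admissible $h$ this could not happen.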
Prove that if $h$ never overestimates by more than $c$, A* using $h$ returns a solution whose cost exceeds that of the optimal solution by no more than $c$.

Exercise 72

Prove that if a heuristic is consistent, it must be admissible. Construct an admissible heuristic that is not consistent.

Exercise 73

The traveling salesperson problem (TSP) can be solved with the minimum-spanning-tree (MST) heuristic, which estimates the cost of completing a tour, given that a partial tour has already been constructed. The MST cost of a set of cities is the smallest sum of the link costs of any tree that connects all the cities.
1. Show how this heuristic can be derived from a relaxed version of the TSP.
2. Show that the MST heuristic dominates straight-line distance.
3. Write a problem generator for instances of the TSP where cities are represented by random points in the unit square.
4. Find an efficient algorithm in the literature for constructing the MST, and use it with A* graph search to solve instances of the TSP.

Exercise 74 (Gaschnig-h-exercise)

On page Gaschnig-h-page, we defined the relaxation of the 8-puzzle in which a tile can move from square A to square B if B is blank. The exact solution of this problem defines Gaschnig’s heuristic (Gaschnig:1979). Explain why Gaschnig’s heuristic is at least as accurate as $h_1$ (misplaced tiles), and show cases where it is more accurate than both $h_1$ and $h_2$ (Manhattan distance). Explain how to calculate Gaschnig’s heuristic efficiently.

Exercise 75

We gave two simple heuristics for the 8-puzzle: Manhattan distance and misplaced tiles. Several heuristics in the literature purport to improve on this—see, for example, Nilsson:1971, Mostow+Prieditis:1989, and Hansson+al:1992. Test these claims by implementing the heuristics and comparing the performance of the resulting algorithms.

Exercise 1

Give the name of the algorithm that results from each of the following special cases:
1. Local beam search with $k = 1$.
2.
Local beam search with one initial state and no limit on the number of states retained.
3. Simulated annealing with $T = 0$ at all times (and omitting the termination test).
4. Simulated annealing with $T=\infty$ at all times.
5. Genetic algorithm with population size $N = 1$.

Exercise 2

Exercise brio-exercise considers the problem of building railway tracks under the assumption that pieces fit exactly with no slack. Now consider the real problem, in which pieces don’t fit exactly but allow for up to 10 degrees of rotation to either side of the “proper” alignment. Explain how to formulate the problem so it could be solved by simulated annealing.

Exercise 3

In this exercise, we explore the use of local search methods to solve TSPs of the type defined in Exercise tsp-mst-exercise.
1. Implement and test a hill-climbing method to solve TSPs. Compare the results with optimal solutions obtained from the A* algorithm with the MST heuristic (Exercise tsp-mst-exercise).
2. Repeat part (a) using a genetic algorithm instead of hill climbing. You may want to consult Larranaga+al:1999 for some suggestions for representations.

Exercise 4 (hill-climbing-exercise)

Generate a large number of 8-puzzle and 8-queens instances and solve them (where possible) by hill climbing (steepest-ascent and first-choice variants), hill climbing with random restart, and simulated annealing. Measure the search cost and percentage of solved problems and graph these against the optimal solution cost. Comment on your results.

Exercise 5 (cond-plan-repeated-exercise)

The And-Or-Graph-Search algorithm in Figure and-or-graph-search-algorithm checks for repeated states only on the path from the root to the current state. Suppose that, in addition, the algorithm were to store every visited state and check against that list.
(See Figure breadth-first-search-algorithm for an example.) Determine the information that should be stored and how the algorithm should use that information when a repeated state is found. (*Hint*: You will need to distinguish at least between states for which a successful subplan was constructed previously and states for which no subplan could be found.) Explain how to use labels, as defined in Section cyclic-plan-section, to avoid having multiple copies of subplans.

Exercise 6 (cond-loop-exercise)

Explain precisely how to modify the And-Or-Graph-Search algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can point back to an earlier part of the plan, modifying Or-Search so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) the slippery, erratic vacuum world. You might wish to use a computer implementation to check your results.

Exercise 7

In Section conformant-section we introduced belief states to solve sensorless search problems. A sequence of actions solves a sensorless problem if it maps every physical state in the initial belief state $b$ to a goal state. Suppose the agent knows $h^*(s)$, the true optimal cost of solving the physical state $s$ in the fully observable problem, for every state $s$ in $b$. Find an admissible heuristic $h(b)$ for the sensorless problem in terms of these costs, and prove its admissibility. Comment on the accuracy of this heuristic on the sensorless vacuum problem of Figure vacuum2-sets-figure. How well does A* perform?

Exercise 8 (belief-state-superset-exercise)

This exercise explores subset–superset relations between belief states in sensorless or partially observable environments.
1.
Prove that if an action sequence is a solution for a belief state $b$, it is also a solution for any subset of $b$. Can anything be said about supersets of $b$?
2. Explain in detail how to modify graph search for sensorless problems to take advantage of your answers in (a).
3. Explain in detail how to modify and–or search for partially observable problems, beyond the modifications you describe in (b).

Exercise 9 (multivalued-sensorless-exercise)

On page multivalued-sensorless-page it was assumed that a given action would have the same cost when executed in any physical state within a given belief state. (This leads to a belief-state search problem with well-defined step costs.) Now consider what happens when the assumption does not hold. Does the notion of optimality still make sense in this context, or does it require modification? Consider also various possible definitions of the “cost” of executing an action in a belief state; for example, we could use the minimum of the physical costs; or the maximum; or a cost interval with the lower bound being the minimum cost and the upper bound being the maximum; or just keep the set of all possible costs for that action. For each of these, explore whether A* (with modifications if necessary) can return optimal solutions.

Exercise 10 (vacuum-solvable-exercise)

Consider the sensorless version of the erratic vacuum world. Draw the belief-state space reachable from the initial belief state $\{1,2,3,4,5,6,7,8\}$, and explain why the problem is unsolvable.

Exercise 11 (vacuum-solvable-exercise)

Consider the sensorless version of the erratic vacuum world. Draw the belief-state space reachable from the initial belief state $\{1,3,5,7\}$, and explain why the problem is unsolvable.

Exercise 12 (path-planning-agent-exercise)

We can turn the navigation problem in Exercise path-planning-exercise into an environment as follows:
- The percept will be a list of the positions, relative to the agent, of the visible vertices.
The percept does not include the position of the robot! The robot must learn its own position from the map; for now, you can assume that each location has a different “view.”
- Each action will be a vector describing a straight-line path to follow. If the path is unobstructed, the action succeeds; otherwise, the robot stops at the point where its path first intersects an obstacle. If the agent returns a zero motion vector and is at the goal (which is fixed and known), then the environment teleports the agent to a random location (not inside an obstacle).
- The performance measure charges the agent 1 point for each unit of distance traversed and awards 1000 points each time the goal is reached.

1. Implement this environment and a problem-solving agent for it. After each teleportation, the agent will need to formulate a new problem, which will involve discovering its current location.
2. Document your agent’s performance (by having the agent generate suitable commentary as it moves around) and report its performance over 100 episodes.
3. Modify the environment so that 30% of the time the agent ends up at an unintended destination (chosen randomly from the other visible vertices if any; otherwise, no move at all). This is a crude model of the motion errors of a real robot. Modify the agent so that when such an error is detected, it finds out where it is and then constructs a plan to get back to where it was and resume the old plan. Remember that sometimes getting back to where it was might also fail! Show an example of the agent successfully overcoming two successive motion errors and still reaching the goal.
4. Now try two different recovery schemes after an error: (1) head for the closest vertex on the original route; and (2) replan a route to the goal from the new location. Compare the performance of the three recovery schemes. Would the inclusion of search costs affect the comparison?
5. Now suppose that there are locations from which the view is identical.
(For example, suppose the world is a grid with square obstacles.) What kind of problem does the agent now face? What do solutions look like?

Exercise 13 (online-offline-exercise)

Suppose that an agent is in a $3 \times 3$ maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (1,1), that the goal is at (3,3), and that the actions Up, Down, Left, Right have their usual effects unless blocked by a wall. The agent does not know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before.
1. Explain how this online search problem can be viewed as an offline search in belief-state space, where the initial belief state includes all possible environment configurations. How large is the initial belief state? How large is the space of belief states?
2. How many distinct percepts are possible in the initial state?
3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?

Notice that this contingency plan is a solution for every possible environment fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

Exercise 14 (online-offline-exercise)

Suppose that an agent is in a $3 \times 3$ maze environment like the one shown in Figure maze-3x3-figure. The agent knows that its initial location is (3,3), that the goal is at (1,1), and that the four actions *Up*, *Down*, *Left*, *Right* have their usual effects unless blocked by a wall. The agent does *not* know where the internal walls are. In any given state, the agent perceives the set of legal actions; it can also tell whether the state is one it has visited before or is a new state.
1. Explain how this online search problem can be viewed as an offline search in belief-state space, where the initial belief state includes all possible environment configurations.
How large is the initial belief state? How large is the space of belief states?
2. How many distinct percepts are possible in the initial state?
3. Describe the first few branches of a contingency plan for this problem. How large (roughly) is the complete plan?

Notice that this contingency plan is a solution for *every possible environment* fitting the given description. Therefore, interleaving of search and execution is not strictly necessary even in unknown environments.

Exercise 15 (path-planning-hc-exercise)

In this exercise, we examine hill climbing in the context of robot navigation, using the environment in Figure geometric-scene-figure as an example.
1. Repeat Exercise path-planning-agent-exercise using hill climbing. Does your agent ever get stuck in a local minimum? Is it *possible* for it to get stuck with convex obstacles?
2. Construct a nonconvex polygonal environment in which the agent gets stuck.
3. Modify the hill-climbing algorithm so that, instead of doing a depth-1 search to decide where to go next, it does a depth-$k$ search. It should find the best $k$-step path and do one step along it, and then repeat the process.
4. Is there some $k$ for which the new algorithm is guaranteed to escape from local minima?
5. Explain how LRTA* enables the agent to escape from local minima in this case.

Exercise 16

Like DFS, online DFS is incomplete for reversible state spaces with infinite paths. For example, suppose that states are points on the infinite two-dimensional grid and actions are unit vectors $(1,0)$, $(0,1)$, $(-1,0)$, $(0,-1)$, tried in that order. Show that online DFS starting at $(0,0)$ will not reach $(1,-1)$. Suppose the agent can observe, in addition to its current state, all successor states and the actions that would lead to them. Write an algorithm that is complete even for bidirected state spaces with infinite paths. What states does it visit in reaching $(1,-1)$?

Exercise 17

Relate the time complexity of LRTA* to its space complexity.
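A minimal LRTA*-style sketch (our own simplification: deterministic moves, unit costs, and fully observable neighbors) illustrates the escape mechanism behind part 5 of Exercise 15: the stored estimate $H(s)$ is raised each time the agent leaves a state, so a local minimum gradually “fills in” until the agent walks out of it.

```python
def lrta_star(neighbors, H, start, goal, max_steps=100):
    """Simplified LRTA*-style agent: on leaving state s it raises H[s]
    to 1 + min over the neighbors' estimates, then greedily moves to
    the neighbor minimizing 1 + H. Rising estimates fill in local
    minima, so the agent cannot oscillate forever."""
    s, trail = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return trail
        H[s] = 1 + min(H[n] for n in neighbors[s])    # learn a better estimate
        s = min(neighbors[s], key=lambda n: 1 + H[n])  # greedy move on learned H
        trail.append(s)
    return trail

# A corridor 0-1-2-3-4 with the goal at 4; the initial estimates dip at
# state 2, so a pure greedy descent would get stuck there.
corridor = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
trail = lrta_star(corridor, {0: 3, 1: 2, 2: 1, 3: 2, 4: 0}, 0, 4)
print(trail)   # [0, 1, 2, 1, 0, 1, 2, 3, 4]
```

The trail shows the agent backtracking out of the dip at state 2 as its estimates rise, then marching to the goal; the $H$ table is also the answer to Exercise 17's point that LRTA*'s time and space costs both grow with the number of states it has visited.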
Exercise 1

Suppose you have an oracle, $OM(s)$, that correctly predicts the opponent’s move in any state. Using this, formulate the definition of a game as a (single-agent) search problem. Describe an algorithm for finding the optimal move.

Exercise 2

Consider the problem of solving two 8-puzzles.
1. Give a complete problem formulation in the style of Chapter search-chapter.
2. How large is the reachable state space? Give an exact numerical expression.
3. Suppose we make the problem adversarial as follows: the two players take turns moving; a coin is flipped to determine the puzzle on which to make a move in that turn; and the winner is the first to solve one puzzle. Which algorithm can be used to choose a move in this setting?
4. Does the game eventually end, given optimal play? Explain.

Figure pursuit-evasion-game-figure: Pursuit–evasion game. (a) A map where the cost of every edge is 1. Initially the pursuer $P$ is at node b and the evader $E$ is at node d. (b) A partial game tree for this map. Each node is labeled with the $P,E$ positions. $P$ moves first. Branches marked “?” have yet to be explored.

Exercise 3

Imagine that, in Exercise two-friends-exercise, one of the friends wants to avoid the other. The problem then becomes a two-player game. We assume now that the players take turns moving. The game ends only when the players are on the same node; the terminal payoff to the pursuer is minus the total time taken. (The evader “wins” by never losing.) An example is shown in Figure pursuit-evasion-game-figure.
1. Copy the game tree and mark the values of the terminal nodes.
2. Next to each internal node, write the strongest fact you can infer about its value (a number, one or more inequalities such as “$\geq 14$”, or a “?”).
3. Beneath each question mark, write the name of the node reached by that branch.
4. Explain how a bound on the value of the nodes in (c) can be derived from consideration of shortest-path lengths on the map, and derive such bounds for these nodes.
Remember the cost to get to each leaf as well as the cost to solve it.
5. Now suppose that the tree as given, with the leaf bounds from (d), is evaluated from left to right. Circle those “?” nodes that would not need to be expanded further, given the bounds from part (d), and cross out those that need not be considered at all.
6. Can you prove anything in general about who wins the game on a map that is a tree?

Exercise 4 (game-playing-chance-exercise)

Describe and implement state descriptions, move generators, terminal tests, utility functions, and evaluation functions for one or more of the following stochastic games: Monopoly, Scrabble, bridge play with a given contract, or Texas hold’em poker.

Exercise 5

Describe and implement a real-time, multiplayer game-playing environment, where time is part of the environment state and players are given fixed time allocations.

Exercise 6

Discuss how well the standard approach to game playing would apply to games such as tennis, pool, and croquet, which take place in a continuous physical state space.

Exercise 7 (minimax-optimality-exercise)

Prove the following assertion: For every game tree, the utility obtained by MAX using minimax decisions against a suboptimal MIN will never be lower than the utility obtained playing against an optimal MIN. Can you come up with a game tree in which MAX can do still better using a suboptimal strategy against a suboptimal MIN?

Player $A$ moves first. The two players take turns moving, and each player must move his token to an open adjacent space in either direction. If the opponent occupies an adjacent space, then a player may jump over the opponent to the next open space if any. (For example, if $A$ is on 3 and $B$ is on 2, then $A$ may move back to 1.) The game ends when one player reaches the opposite end of the board. If player $A$ reaches space 4 first, then the value of the game to $A$ is $+1$; if player $B$ reaches space 1 first, then the value of the game to $A$ is $-1$.
Figure line-game4-figure: The starting position of a simple game.

Exercise 8

Consider the two-player game described in Figure line-game4-figure.

1. Draw the complete game tree, using the following conventions:
   - Write each state as $(s_A, s_B)$, where $s_A$ and $s_B$ denote the token locations.
   - Put each terminal state in a square box and write its game value in a circle.
   - Put loop states (states that already appear on the path to the root) in double square boxes. Since their value is unclear, annotate each with a "?" in a circle.
2. Now mark each node with its backed-up minimax value (also in a circle). Explain how you handled the "?" values and why.
3. Explain why the standard minimax algorithm would fail on this game tree and briefly sketch how you might fix it, drawing on your answer to (b). Does your modified algorithm give optimal decisions for all games with loops?
4. This 4-square game can be generalized to $n$ squares for any $n > 2$. Prove that $A$ wins if $n$ is even and loses if $n$ is odd.

Exercise 9

This problem exercises the basic concepts of game playing, using tic-tac-toe (noughts and crosses) as an example. We define $X_n$ as the number of rows, columns, or diagonals with exactly $n$ $X$'s and no $O$'s. Similarly, $O_n$ is the number of rows, columns, or diagonals with just $n$ $O$'s. The utility function assigns $+1$ to any position with $X_3 = 1$ and $-1$ to any position with $O_3 = 1$. All other terminal positions have utility 0. For nonterminal positions, we use a linear evaluation function defined as ${Eval}(s) = 3X_2(s) + X_1(s) - (3O_2(s) + O_1(s))$.

1. Approximately how many possible games of tic-tac-toe are there?
2. Show the whole game tree starting from an empty board down to depth 2 (i.e., one $X$ and one $O$ on the board), taking symmetry into account.
3. Mark on your tree the evaluations of all the positions at depth 2.
4.
Using the minimax algorithm, mark on your tree the backed-up values for the positions at depths 1 and 0, and use those values to choose the best starting move.
5. Circle the nodes at depth 2 that would not be evaluated if alpha–beta pruning were applied, assuming the nodes are generated in the optimal order for alpha–beta pruning.

Exercise 10

Consider the family of generalized tic-tac-toe games, defined as follows. Each particular game is specified by a set $\mathcal{S}$ of squares and a collection $\mathcal{W}$ of winning positions. Each winning position is a subset of $\mathcal{S}$. For example, in standard tic-tac-toe, $\mathcal{S}$ is a set of 9 squares and $\mathcal{W}$ is a collection of 8 subsets of $\mathcal{S}$: the three rows, the three columns, and the two diagonals. In other respects, the game is identical to standard tic-tac-toe. Starting from an empty board, players alternate placing their marks on an empty square. A player who marks every square in a winning position wins the game. It is a tie if all squares are marked and neither player has won.

1. Let $N = |\mathcal{S}|$, the number of squares. Give an upper bound on the number of nodes in the complete game tree for generalized tic-tac-toe as a function of $N$.
2. Give a lower bound on the size of the game tree for the worst case, where $\mathcal{W} = \{\,\}$.
3. Propose a plausible evaluation function that can be used for any instance of generalized tic-tac-toe. The function may depend on $\mathcal{S}$ and $\mathcal{W}$.
4. Assume that it is possible to generate a new board and check whether it is a winning position in $100N$ machine instructions, and assume a 2 gigahertz processor. Ignore memory limitations. Using your estimate in (a), roughly how large a game tree can be completely solved by alpha–beta in a second of CPU time? a minute? an hour?

Exercise 11

Develop a general game-playing program, capable of playing a variety of games.

1.
Implement move generators and evaluation functions for one or more of the following games: Kalah, Othello, checkers, and chess.
2. Construct a general alpha–beta game-playing agent.
3. Compare the effect of increasing search depth, improving move ordering, and improving the evaluation function. How close does your effective branching factor come to the ideal case of perfect move ordering?
4. Implement a selective search algorithm, such as B* Berliner:1979, conspiracy number search McAllester:1988, or MGSS* Russell+Wefald:1989 and compare its performance to A*.

Exercise 12

Describe how the minimax and alpha–beta algorithms change for two-player, non-zero-sum games in which each player has a distinct utility function and both utility functions are known to both players. If there are no constraints on the two terminal utilities, is it possible for any node to be pruned by alpha–beta? What if the players' utility functions on any state differ by at most a constant $k$, making the game almost cooperative?

Exercise 13

Describe how the minimax and alpha–beta algorithms change for two-player, non-zero-sum games in which each player has a distinct utility function and both utility functions are known to both players. If there are no constraints on the two terminal utilities, is it possible for any node to be pruned by alpha–beta? What if the players' utility functions on any state sum to a number between constants $-k$ and $k$, making the game almost zero-sum?

Exercise 14

Develop a formal proof of correctness for alpha–beta pruning. To do this, consider the situation shown in Figure alpha-beta-proof-figure. The question is whether to prune node $n_j$, which is a max-node and a descendant of node $n_1$. The basic idea is to prune it if and only if the minimax value of $n_1$ can be shown to be independent of the value of $n_j$.

1. Node $n_1$ takes on the minimum value among its children: $n_1 = \min(n_2, n_{21}, \ldots, n_{2b_2})$.
Find a similar expression for $n_2$ and hence an expression for $n_1$ in terms of $n_j$.
2. Let $l_i$ be the minimum (or maximum) value of the nodes to the left of node $n_i$ at depth $i$, whose minimax value is already known. Similarly, let $r_i$ be the minimum (or maximum) value of the unexplored nodes to the right of $n_i$ at depth $i$. Rewrite your expression for $n_1$ in terms of the $l_i$ and $r_i$ values.
3. Now reformulate the expression to show that in order to affect $n_1$, $n_j$ must not exceed a certain bound derived from the $l_i$ values.
4. Repeat the process for the case where $n_j$ is a min-node.

Figure alpha-beta-proof-figure: Situation when considering whether to prune node $n_j$.

Exercise 15

Prove that the alpha–beta algorithm takes time $O(b^{m/2})$ with optimal move ordering, where $m$ is the maximum depth of the game tree.

Exercise 16

Suppose you have a chess program that can evaluate 5 million nodes per second. Decide on a compact representation of a game state for storage in a transposition table. About how many entries can you fit in a 1-gigabyte in-memory table? Will that be enough for the three minutes of search allocated for one move? How many table lookups can you do in the time it would take to do one evaluation? Now suppose the transposition table is stored on disk. About how many evaluations could you do in the time it takes to do one disk seek with standard disk hardware?

Exercise 17

Suppose you have a chess program that can evaluate 10 million nodes per second. Decide on a compact representation of a game state for storage in a transposition table. About how many entries can you fit in a 2-gigabyte in-memory table? Will that be enough for the three minutes of search allocated for one move? How many table lookups can you do in the time it would take to do one evaluation? Now suppose the transposition table is stored on disk. About how many evaluations could you do in the time it takes to do one disk seek with standard disk hardware?
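The transposition-table questions above reduce to back-of-envelope arithmetic. The sketch below works through Exercise 16's numbers under one assumed representation: a 16-byte entry (say, an 8-byte Zobrist-style key plus a packed value/depth/flag word). The entry size is an assumption for illustration; your own state representation may well differ.

```python
# Back-of-envelope sizing for a transposition table (illustrative numbers only).
ENTRY_BYTES = 16            # assumed: 8-byte hash key + packed value/depth/flag
TABLE_BYTES = 1 * 1024**3   # 1-gigabyte in-memory table
EVALS_PER_SEC = 5_000_000   # evaluation speed given in the exercise
SEARCH_SECONDS = 3 * 60     # three minutes of search per move

entries = TABLE_BYTES // ENTRY_BYTES
nodes_searched = EVALS_PER_SEC * SEARCH_SECONDS

print(f"table holds ~{entries:,} entries")          # ~67 million
print(f"search visits ~{nodes_searched:,} nodes")   # 900 million
print("table large enough for the whole search?", entries >= nodes_searched)
```

Under these assumptions the table holds roughly 67 million entries while the search visits roughly 900 million nodes, so the table cannot hold every position and a replacement policy is needed.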
Figure trivial-chance-game-figure: The complete game tree for a trivial game with chance nodes.

Exercise 18

This question considers pruning in games with chance nodes. Figure trivial-chance-game-figure shows the complete game tree for a trivial game. Assume that the leaf nodes are to be evaluated in left-to-right order, and that before a leaf node is evaluated, we know nothing about its value: the range of possible values is $-\infty$ to $\infty$.

1. Copy the figure, mark the value of all the internal nodes, and indicate the best move at the root with an arrow.
2. Given the values of the first six leaves, do we need to evaluate the seventh and eighth leaves? Given the values of the first seven leaves, do we need to evaluate the eighth leaf? Explain your answers.
3. Suppose the leaf node values are known to lie between $-2$ and $2$ inclusive. After the first two leaves are evaluated, what is the value range for the left-hand chance node?
4. Circle all the leaves that need not be evaluated under the assumption in (c).

Exercise 19

Implement the expectiminimax algorithm and the *-alpha–beta algorithm, which is described by Ballard:1983, for pruning game trees with chance nodes. Try them on a game such as backgammon and measure the pruning effectiveness of *-alpha–beta.

Exercise 20 (game-linear-transform)

Prove that with a positive linear transformation of leaf values (i.e., transforming a value $x$ to $ax + b$ where $a > 0$), the choice of move remains unchanged in a game tree, even when there are chance nodes.

Exercise 21 (game-playing-monte-carlo-exercise)

Consider the following procedure for choosing moves in games with chance nodes:

- Generate some dice-roll sequences (say, 50) down to a suitable depth (say, 8).
- With known dice rolls, the game tree becomes deterministic. For each dice-roll sequence, solve the resulting deterministic game tree using alpha–beta.
- Use the results to estimate the value of each move and to choose the best.

Will this procedure work well? Why (or why not)?
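Exercises 18–21 all revolve around computing expectiminimax values. As a point of reference, here is a minimal sketch (not the book's pseudocode) that evaluates hand-built trees; the nested-tuple encoding of nodes is an assumption made for brevity.

```python
# A minimal expectiminimax sketch over hand-built trees.
# A node is a numeric leaf, ('max', [children]), ('min', [children]),
# or ('chance', [(probability, child), ...]).

def expectiminimax(node):
    if isinstance(node, (int, float)):              # leaf: its utility
        return node
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    if kind == 'chance':                            # expected value over outcomes
        return sum(p * expectiminimax(c) for p, c in children)
    raise ValueError(f"unknown node kind: {kind}")

# A root max node choosing between two coin-flip gambles.
tree = ('max', [('chance', [(0.5, 2), (0.5, 2)]),
                ('chance', [(0.5, 0), (0.5, 5)])])
print(expectiminimax(tree))   # 2.5: the right-hand gamble is preferred
```

Note that, unlike minimax values, a chance node's value depends on leaf magnitudes and not just their order, which is exactly why Exercise 20's linear-transformation result needs proof.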
Exercise 22

In the following, a "max" tree consists only of max nodes, whereas an "expectimax" tree consists of a max node at the root with alternating layers of chance and max nodes. At chance nodes, all outcome probabilities are nonzero. The goal is to find the value of the root with a bounded-depth search. For each of (a)–(f), either give an example or explain why this is impossible.

1. Assuming that leaf values are finite but unbounded, is pruning (as in alpha–beta) ever possible in a max tree?
2. Is pruning ever possible in an expectimax tree under the same conditions?
3. If leaf values are all nonnegative, is pruning ever possible in a max tree? Give an example, or explain why not.
4. If leaf values are all nonnegative, is pruning ever possible in an expectimax tree? Give an example, or explain why not.
5. If leaf values are all in the range $[0,1]$, is pruning ever possible in a max tree? Give an example, or explain why not.
6. If leaf values are all in the range $[0,1]$, is pruning ever possible in an expectimax tree?
7. Consider the outcomes of a chance node in an expectimax tree. Which of the following evaluation orders is most likely to yield pruning opportunities?
   i. Lowest probability first
   ii. Highest probability first
   iii. Doesn't make any difference

Exercise 23

In the following, a "max" tree consists only of max nodes, whereas an "expectimax" tree consists of a max node at the root with alternating layers of chance and max nodes. At chance nodes, all outcome probabilities are nonzero. The goal is to find the value of the root with a bounded-depth search.

1. Assuming that leaf values are finite but unbounded, is pruning (as in alpha–beta) ever possible in a max tree? Give an example, or explain why not.
2. Is pruning ever possible in an expectimax tree under the same conditions? Give an example, or explain why not.
3. If leaf values are constrained to be in the range $[0,1]$, is pruning ever possible in a max tree? Give an example, or explain why not.
4.
If leaf values are constrained to be in the range $[0,1]$, is pruning ever possible in an expectimax tree? Give an example (qualitatively different from your example in (e), if any), or explain why not.
5. If leaf values are constrained to be nonnegative, is pruning ever possible in a max tree? Give an example, or explain why not.
6. If leaf values are constrained to be nonnegative, is pruning ever possible in an expectimax tree? Give an example, or explain why not.
7. Consider the outcomes of a chance node in an expectimax tree. Which of the following evaluation orders is most likely to yield pruning opportunities: (i) Lowest probability first; (ii) Highest probability first; (iii) Doesn't make any difference?

Exercise 24

Suppose you have an oracle, $OM(s)$, that correctly predicts the opponent's move in any state. Using this, formulate the definition of a game as a (single-agent) search problem. Describe an algorithm for finding the optimal move.

Exercise 25

Consider carefully the interplay of chance events and partial information in each of the games in Exercise game-playing-chance-exercise.

1. For which is the standard expectiminimax model appropriate? Implement the algorithm and run it in your game-playing agent, with appropriate modifications to the game-playing environment.
2. For which would the scheme described in Exercise game-playing-monte-carlo-exercise be appropriate?
3. Discuss how you might deal with the fact that in some of the games, the players do not have the same knowledge of the current state.

Exercise 1

How many solutions are there for the map-coloring problem in Figure australia-figure? How many solutions if four colors are allowed? Two colors?

Exercise 2

Consider the problem of placing $k$ knights on an $n \times n$ chessboard such that no two knights are attacking each other, where $k$ is given and $k \leq n^2$.

1. Choose a CSP formulation. In your formulation, what are the variables?
2. What are the possible values of each variable?
3.
What sets of variables are constrained, and how?
4. Now consider the problem of putting *as many knights as possible* on the board without any attacks. Explain how to solve this with local search by defining appropriate ACTIONS and RESULT functions and a sensible objective function.

Exercise 3 (crossword-exercise)

Consider the problem of constructing (not solving) crossword puzzles: fitting words into a rectangular grid. The grid, which is given as part of the problem, specifies which squares are blank and which are shaded. Assume that a list of words (i.e., a dictionary) is provided and that the task is to fill in the blank squares by using any subset of the list. Formulate this problem precisely in two ways:

1. As a general search problem. Choose an appropriate search algorithm and specify a heuristic function. Is it better to fill in blanks one letter at a time or one word at a time?
2. As a constraint satisfaction problem. Should the variables be words or letters?

Which formulation do you think will be better? Why?

Exercise 4 (csp-definition-exercise)

Give precise formulations for each of the following as constraint satisfaction problems:

1. Rectilinear floor-planning: find non-overlapping places in a large rectangle for a number of smaller rectangles.
2. Class scheduling: There is a fixed number of professors and classrooms, a list of classes to be offered, and a list of possible time slots for classes. Each professor has a set of classes that he or she can teach.
3. Hamiltonian tour: given a network of cities connected by roads, choose an order to visit all cities in a country without repeating any.

Exercise 5

Solve the cryptarithmetic problem in Figure cryptarithmetic-figure by hand, using the strategy of backtracking with forward checking and the MRV and least-constraining-value heuristics.

Exercise 6 (nary-csp-exercise)

Show how a single ternary constraint such as "$A + B = C$" can be turned into three binary constraints by using an auxiliary variable.
You may assume finite domains. (*Hint:* Consider a new variable that takes on values that are pairs of other values, and consider constraints such as "$X$ is the first element of the pair $Y$.") Next, show how constraints with more than three variables can be treated similarly. Finally, show how unary constraints can be eliminated by altering the domains of variables. This completes the demonstration that any CSP can be transformed into a CSP with only binary constraints.

Exercise 7 (zebra-exercise)

Consider the following logic puzzle: In five houses, each with a different color, live five persons of different nationalities, each of whom prefers a different brand of candy, a different drink, and a different pet. Given the following facts, the questions to answer are "Where does the zebra live, and in which house do they drink water?"

- The Englishman lives in the red house.
- The Spaniard owns the dog.
- The Norwegian lives in the first house on the left.
- The green house is immediately to the right of the ivory house.
- The man who eats Hershey bars lives in the house next to the man with the fox.
- Kit Kats are eaten in the yellow house.
- The Norwegian lives next to the blue house.
- The Smarties eater owns snails.
- The Snickers eater drinks orange juice.
- The Ukrainian drinks tea.
- The Japanese eats Milky Ways.
- Kit Kats are eaten in a house next to the house where the horse is kept.
- Coffee is drunk in the green house.
- Milk is drunk in the middle house.

Discuss different representations of this problem as a CSP. Why would one prefer one representation over another?

Exercise 8

Consider the graph with 8 nodes $A_1$, $A_2$, $A_3$, $A_4$, $H$, $T$, $F_1$, $F_2$. $A_i$ is connected to $A_{i+1}$ for all $i$, each $A_i$ is connected to $H$, $H$ is connected to $T$, and $T$ is connected to each $F_i$. Find a 3-coloring of this graph by hand using the following strategy: backtracking with conflict-directed backjumping, the variable order $A_1$, $H$, $A_4$, $F_1$, $A_2$, $F_2$, $A_3$, $T$, and the value order $R$, $G$, $B$.
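When tracing Exercise 8 by hand, it helps to know the graph really is 3-colorable. The sketch below checks that with plain chronological backtracking; it is deliberately not the conflict-directed backjumping the exercise specifies, so its search path will differ from your hand trace even though any coloring it finds is valid.

```python
# Cross-check for Exercise 8: confirm the 8-node graph is 3-colorable.
# Plain chronological backtracking, NOT conflict-directed backjumping.
NEIGHBORS = {
    'A1': ['A2', 'H'],
    'A2': ['A1', 'A3', 'H'],
    'A3': ['A2', 'A4', 'H'],
    'A4': ['A3', 'H'],
    'H':  ['A1', 'A2', 'A3', 'A4', 'T'],
    'T':  ['H', 'F1', 'F2'],
    'F1': ['T'],
    'F2': ['T'],
}
COLORS = ['R', 'G', 'B']                                  # the exercise's value order
ORDER = ['A1', 'H', 'A4', 'F1', 'A2', 'F2', 'A3', 'T']    # its variable order

def backtrack(assignment):
    if len(assignment) == len(ORDER):
        return dict(assignment)                 # every variable colored
    var = ORDER[len(assignment)]
    for color in COLORS:
        # color is usable only if no already-colored neighbor has it
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                 # undo and try the next color
    return None                                 # dead end: forces backtracking

solution = backtrack({})
print(solution)
```

The $A_1$–$A_4$ chain all touching $H$ forces the solver to backtrack out of its early choice for $A_4$, which is the same conflict the hand trace is designed to expose.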
Exercise 9

Explain why it is a good heuristic to choose the variable that is *most* constrained but the value that is *least* constraining in a CSP search.

Exercise 10

Generate random instances of map-coloring problems as follows: scatter $n$ points on the unit square; select a point $X$ at random, connect $X$ by a straight line to the nearest point $Y$ such that $X$ is not already connected to $Y$ and the line crosses no other line; repeat the previous step until no more connections are possible. The points represent regions on the map and the lines connect neighbors. Now try to find $k$-colorings of each map, for both $k=3$ and $k=4$, using min-conflicts, backtracking, backtracking with forward checking, and backtracking with MAC. Construct a table of average run times for each algorithm for values of $n$ up to the largest you can manage. Comment on your results.

Exercise 11

Use the AC-3 algorithm to show that arc consistency can detect the inconsistency of the partial assignment $\{WA = green, V = red\}$ for the problem shown in Figure australia-figure.

Exercise 12

Use the AC-3 algorithm to show that arc consistency can detect the inconsistency of the partial assignment $\{WA = red, V = blue\}$ for the problem shown in Figure australia-figure.

Exercise 13

What is the worst-case complexity of running AC-3 on a tree-structured CSP?

Exercise 14 (ac4-exercise)

AC-3 puts back on the queue every arc ($X_{k}, X_{i}$) whenever any value is deleted from the domain of $X_{i}$, even if each value of $X_{k}$ is consistent with several remaining values of $X_{i}$. Suppose that, for every arc ($X_{k}, X_{i}$), we keep track of the number of remaining values of $X_{i}$ that are consistent with each value of $X_{k}$. Explain how to update these numbers efficiently and hence show that arc consistency can be enforced in total time $O(n^2d^2)$.

Exercise 15

The Tree-CSP-Solver (Figure tree-csp-figure) makes arcs consistent starting at the leaves and working backwards towards the root. Why does it do that?
What would happen if it went in the opposite direction?

Exercise 16

We introduced Sudoku as a CSP to be solved by search over partial assignments because that is the way people generally undertake solving Sudoku problems. It is also possible, of course, to attack these problems with local search over complete assignments. How well would a local solver using the min-conflicts heuristic do on Sudoku problems?

Exercise 17

Define in your own words the terms constraint, backtracking search, arc consistency, backjumping, min-conflicts, and cycle cutset.

Exercise 18

Define in your own words the terms constraint, commutativity, arc consistency, backjumping, min-conflicts, and cycle cutset.

Exercise 19

Suppose that a graph is known to have a cycle cutset of no more than $k$ nodes. Describe a simple algorithm for finding a minimal cycle cutset whose run time is not much more than $O(n^k)$ for a CSP with $n$ variables. Search the literature for methods for finding approximately minimal cycle cutsets in time that is polynomial in the size of the cutset. Does the existence of such algorithms make the cycle cutset method practical?

Exercise 20

Consider the problem of tiling a surface (completely and exactly covering it) with $n$ dominoes ($2 \times 1$ rectangles). The surface is an arbitrary edge-connected (i.e., adjacent along an edge, not just a corner) collection of $2n$ $1 \times 1$ squares (e.g., a checkerboard, a checkerboard with some squares missing, a $10 \times 1$ row of squares, etc.).

1. Formulate this problem precisely as a CSP where the dominoes are the variables.
2. Formulate this problem precisely as a CSP where the squares are the variables, keeping the state space as small as possible. (*Hint:* does it matter which particular domino goes on a given pair of squares?)
3. Construct a surface consisting of 6 squares such that your CSP formulation from part (b) has a *tree-structured* constraint graph.
4.
Describe exactly the set of solvable instances that have a tree-structured constraint graph.

Exercise 1

Suppose the agent has progressed to the point shown in Figure wumpus-seq35-figure(a), page wumpus-seq35-figure, having perceived nothing in [1,1], a breeze in [2,1], and a stench in [1,2], and is now concerned with the contents of [1,3], [2,2], and [3,1]. Each of these can contain a pit, and at most one can contain a wumpus. Following the example of Figure wumpus-entailment-figure, construct the set of possible worlds. (You should find 32 of them.) Mark the worlds in which the KB is true and those in which each of the following sentences is true:

$\alpha_2$ = "There is no pit in [2,2]."
$\alpha_3$ = "There is a wumpus in [1,3]."

Hence show that ${KB} \models \alpha_2$ and ${KB} \models \alpha_3$.

Exercise 2

(Adapted from Barwise+Etchemendy:1993.) Given the following, can you prove that the unicorn is mythical? How about magical? Horned?

Note: If the unicorn is mythical, then it is immortal, but if it is not mythical, then it is a mortal mammal. If the unicorn is either immortal or a mammal, then it is horned. The unicorn is magical if it is horned.

Exercise 3 (truth-value-exercise)

Consider the problem of deciding whether a propositional logic sentence is true in a given model.

1. Write a recursive algorithm PL-True?$(s, m)$ that returns ${true}$ if and only if the sentence $s$ is true in the model $m$ (where $m$ assigns a truth value for every symbol in $s$). The algorithm should run in time linear in the size of the sentence. (Alternatively, use a version of this function from the online code repository.)
2. Give three examples of sentences that can be determined to be true or false in a partial model that does not specify a truth value for some of the symbols.
3. Show that the truth value (if any) of a sentence in a partial model cannot be determined efficiently in general.
4.
Modify your algorithm so that it can sometimes judge truth from partial models, while retaining its recursive structure and linear run time. Give three examples of sentences whose truth in a partial model is not detected by your algorithm.
5. Investigate whether the modified algorithm makes TT-Entails? more efficient.

Exercise 4

Which of the following are correct?

1. ${False} \models {True}$.
2. ${True} \models {False}$.
3. $(A \land B) \models (A \Leftrightarrow B)$.
4. $A \Leftrightarrow B \models A \lor B$.
5. $A \Leftrightarrow B \models \lnot A \lor B$.
6. $(A \land B) \Rightarrow C \models (A \Rightarrow C) \lor (B \Rightarrow C)$.
7. $(C \lor (\lnot A \land \lnot B)) \equiv ((A \Rightarrow C) \land (B \Rightarrow C))$.
8. $(A \lor B) \land (\lnot C \lor \lnot D \lor E) \models (A \lor B)$.
9. $(A \lor B) \land (\lnot C \lor \lnot D \lor E) \models (A \lor B) \land (\lnot D \lor E)$.
10. $(A \lor B) \land \lnot(A \Rightarrow B)$ is satisfiable.
11. $(A \Leftrightarrow B) \land (\lnot A \lor B)$ is satisfiable.
12. $(A \Leftrightarrow B) \Leftrightarrow C$ has the same number of models as $(A \Leftrightarrow B)$ for any fixed set of proposition symbols that includes $A$, $B$, $C$.

Exercise 5

Which of the following are correct?

1. ${False} \models {True}$.
2. ${True} \models {False}$.
3. $(A \land B) \models (A \Leftrightarrow B)$.
4. $A \Leftrightarrow B \models A \lor B$.
5. $A \Leftrightarrow B \models \lnot A \lor B$.
6. $(A \lor B) \land (\lnot C \lor \lnot D \lor E) \models (A \lor B \lor C) \land (B \land C \land D \Rightarrow E)$.
7. $(A \lor B) \land (\lnot C \lor \lnot D \lor E) \models (A \lor B) \land (\lnot D \lor E)$.
8. $(A \lor B) \land \lnot(A \Rightarrow B)$ is satisfiable.
9. $(A \land B) \Rightarrow C \models (A \Rightarrow C) \lor (B \Rightarrow C)$.
10. $(C \lor (\lnot A \land \lnot B)) \equiv ((A \Rightarrow C) \land (B \Rightarrow C))$.
11. $(A \Leftrightarrow B) \land (\lnot A \lor B)$ is satisfiable.
12.
$(A \Leftrightarrow B) \Leftrightarrow C$ has the same number of models as $(A \Leftrightarrow B)$ for any fixed set of proposition symbols that includes $A$, $B$, $C$.

Exercise 6 (deduction-theorem-exercise)

Prove each of the following assertions:

1. $\alpha$ is valid if and only if ${True} \models \alpha$.
2. For any $\alpha$, ${False} \models \alpha$.
3. $\alpha \models \beta$ if and only if the sentence $(\alpha \Rightarrow \beta)$ is valid.
4. $\alpha \equiv \beta$ if and only if the sentence $(\alpha \Leftrightarrow \beta)$ is valid.
5. $\alpha \models \beta$ if and only if the sentence $(\alpha \land \lnot \beta)$ is unsatisfiable.

Exercise 7

Prove, or find a counterexample to, each of the following assertions:

1. If $\alpha \models \gamma$ or $\beta \models \gamma$ (or both) then $(\alpha \land \beta) \models \gamma$.
2. If $(\alpha \land \beta) \models \gamma$ then $\alpha \models \gamma$ or $\beta \models \gamma$ (or both).
3. If $\alpha \models (\beta \lor \gamma)$ then $\alpha \models \beta$ or $\alpha \models \gamma$ (or both).

Exercise 8

Prove, or find a counterexample to, each of the following assertions:

1. If $\alpha \models \gamma$ or $\beta \models \gamma$ (or both) then $(\alpha \land \beta) \models \gamma$.
2. If $\alpha \models (\beta \land \gamma)$ then $\alpha \models \beta$ and $\alpha \models \gamma$.
3. If $\alpha \models (\beta \lor \gamma)$ then $\alpha \models \beta$ or $\alpha \models \gamma$ (or both).

Exercise 9

Consider a vocabulary with only four propositions, $A$, $B$, $C$, and $D$. How many models are there for the following sentences?

1. $B \lor C$.
2. $\lnot A \lor \lnot B \lor \lnot C \lor \lnot D$.
3. $(A \Rightarrow B) \land A \land \lnot B \land C \land D$.

Exercise 10

We have defined four binary logical connectives.

1. Are there any others that might be useful?
2. How many binary connectives can there be?
3. Why are some of them not very useful?

Exercise 11 (logical-equivalence-exercise)

Using a method of your choice, verify each of the equivalences in Table logical-equivalence-table (page logical-equivalence-table).
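Entailment and equivalence claims like those in Exercises 4–11 can be machine-checked by enumerating all models. In the sketch below, sentences are encoded as Python lambdas over a fixed symbol ordering; this encoding is chosen here for brevity and is not anything from the text.

```python
# Model enumeration for checking propositional equivalence and entailment.
from itertools import product

def equivalent(f, g, n_symbols):
    """True iff f and g agree in every model over n_symbols propositions."""
    return all(f(*model) == g(*model)
               for model in product([False, True], repeat=n_symbols))

def entails(f, g, n_symbols):
    """True iff every model of f is also a model of g."""
    return all(g(*model)
               for model in product([False, True], repeat=n_symbols)
               if f(*model))

# De Morgan's law: not(A and B) is equivalent to (not A) or (not B).
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))   # True

# (A and B) entails (A iff B), one of the claims from Exercise 4.
print(entails(lambda a, b: a and b,
              lambda a, b: a == b, 2))                  # True
```

Enumeration is exponential in the number of symbols, which is acceptable here since these exercises use only a handful of propositions.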
Exercise 12 (propositional-validity-exercise)

Decide whether each of the following sentences is valid, unsatisfiable, or neither. Verify your decisions using truth tables or the equivalence rules of Table logical-equivalence-table (page logical-equivalence-table).

1. ${Smoke} \Rightarrow {Smoke}$
2. ${Smoke} \Rightarrow {Fire}$
3. $({Smoke} \Rightarrow {Fire}) \Rightarrow (\lnot {Smoke} \Rightarrow \lnot {Fire})$
4. ${Smoke} \lor {Fire} \lor \lnot {Fire}$
5. $(({Smoke} \land {Heat}) \Rightarrow {Fire}) \Leftrightarrow (({Smoke} \Rightarrow {Fire}) \lor ({Heat} \Rightarrow {Fire}))$
6. $({Smoke} \Rightarrow {Fire}) \Rightarrow (({Smoke} \land {Heat}) \Rightarrow {Fire})$
7. ${Big} \lor {Dumb} \lor ({Big} \Rightarrow {Dumb})$

Exercise 13 (propositional-validity-exercise)

Decide whether each of the following sentences is valid, unsatisfiable, or neither. Verify your decisions using truth tables or the equivalence rules of Table logical-equivalence-table (page logical-equivalence-table).

1. ${Smoke} \Rightarrow {Smoke}$
2. ${Smoke} \Rightarrow {Fire}$
3. $({Smoke} \Rightarrow {Fire}) \Rightarrow (\lnot {Smoke} \Rightarrow \lnot {Fire})$
4. ${Smoke} \lor {Fire} \lor \lnot {Fire}$
5. $(({Smoke} \land {Heat}) \Rightarrow {Fire}) \Leftrightarrow (({Smoke} \Rightarrow {Fire}) \lor ({Heat} \Rightarrow {Fire}))$
6. ${Big} \lor {Dumb} \lor ({Big} \Rightarrow {Dumb})$
7. $({Big} \land {Dumb}) \lor \lnot {Dumb}$

Exercise 14 (cnf-proof-exercise)

Any propositional logic sentence is logically equivalent to the assertion that each possible world in which it would be false is not the case. From this observation, prove that any sentence can be written in CNF.

Exercise 15

Use resolution to prove the sentence $\lnot A \land \lnot B$ from the clauses in Exercise convert-clausal-exercise.

Exercise 16 (inf-exercise)

This exercise looks into the relationship between clauses and implication sentences.

1.
Show that the clause $(\lnot P_1 \lor \cdots \lor \lnot P_m \lor Q)$ is logically equivalent to the implication sentence $(P_1 \land \cdots \land P_m) \Rightarrow Q$.
2. Show that every clause (regardless of the number of positive literals) can be written in the form $(P_1 \land \cdots \land P_m) \Rightarrow (Q_1 \lor \cdots \lor Q_n)$, where the $P$s and $Q$s are proposition symbols. A knowledge base consisting of such sentences is in implicative normal form or Kowalski form Kowalski:1979.
3. Write down the full resolution rule for sentences in implicative normal form.

Exercise 17

According to some political pundits, a person who is radical ($R$) is electable ($E$) if he/she is conservative ($C$), but otherwise is not electable.

1. Which of the following are correct representations of this assertion?
   1. $(R \land E) \Leftrightarrow C$
   2. $R \Rightarrow (E \Leftrightarrow C)$
   3. $R \Rightarrow ((C \Rightarrow E) \lor \lnot E)$
2. Which of the sentences in (a) can be expressed in Horn form?

Exercise 18

This question considers representing satisfiability (SAT) problems as CSPs.

1. Draw the constraint graph corresponding to the SAT problem $$(\lnot X_1 \lor X_2) \land (\lnot X_2 \lor X_3) \land \ldots \land (\lnot X_{n-1} \lor X_n)$$ for the particular case $n = 5$.
2. How many solutions are there for this general SAT problem as a function of $n$?
3. Suppose we apply Backtracking-Search (page backtracking-search-algorithm) to find all solutions to a SAT CSP of the type given in (a). (To find all solutions to a CSP, we simply modify the basic algorithm so it continues searching after each solution is found.) Assume that variables are ordered $X_1, \ldots, X_n$ and ${false}$ is ordered before ${true}$. How much time will the algorithm take to terminate? (Write an $O(\cdot)$ expression as a function of $n$.)
4. We know that SAT problems in Horn form can be solved in linear time by forward chaining (unit propagation).
We also know that every tree-structured binary CSP with discrete, finite domains can be solved in time linear in the number of variables (Section csp-structure-section). Are these two facts connected? Discuss.

Exercise 19

This question considers representing satisfiability (SAT) problems as CSPs.

1. Draw the constraint graph corresponding to the SAT problem $$(\lnot X_1 \lor X_2) \land (\lnot X_2 \lor X_3) \land \ldots \land (\lnot X_{n-1} \lor X_n)$$ for the particular case $n = 4$.
2. How many solutions are there for this general SAT problem as a function of $n$?
3. Suppose we apply Backtracking-Search (page backtracking-search-algorithm) to find all solutions to a SAT CSP of the type given in (a). (To find all solutions to a CSP, we simply modify the basic algorithm so it continues searching after each solution is found.) Assume that variables are ordered $X_1, \ldots, X_n$ and ${false}$ is ordered before ${true}$. How much time will the algorithm take to terminate? (Write an $O(\cdot)$ expression as a function of $n$.)
4. We know that SAT problems in Horn form can be solved in linear time by forward chaining (unit propagation). We also know that every tree-structured binary CSP with discrete, finite domains can be solved in time linear in the number of variables (Section csp-structure-section). Are these two facts connected? Discuss.

Exercise 20

Explain why every nonempty propositional clause, by itself, is satisfiable. Prove rigorously that every set of five 3-SAT clauses is satisfiable, provided that each clause mentions exactly three distinct variables. What is the smallest set of such clauses that is unsatisfiable? Construct such a set.

Exercise 21

A propositional 2-CNF expression is a conjunction of clauses, each containing exactly 2 literals, e.g., $$(A \lor B) \land (\lnot A \lor C) \land (\lnot B \lor D) \land (\lnot C \lor G) \land (\lnot D \lor G).$$

1. Prove using resolution that the above sentence entails $G$.
2. Two clauses are semantically distinct if they are not logically equivalent.
How many semantically distinct 2-CNF clauses can be constructed from $n$ proposition symbols?
3. Using your answer to (b), prove that propositional resolution always terminates in time polynomial in $n$ given a 2-CNF sentence containing no more than $n$ distinct symbols.
4. Explain why your argument in (c) does not apply to 3-CNF.

Exercise 22

Prove each of the following assertions:
1. Every pair of propositional clauses either has no resolvents, or all their resolvents are logically equivalent.
2. There is no clause that, when resolved with itself, yields (after factoring) the clause $(\lnot P \lor \lnot Q)$.
3. If a propositional clause $C$ can be resolved with a copy of itself, it must be logically equivalent to $True$.

Exercise 23

Consider the following sentence: $$[({Food} \Rightarrow {Party}) \lor ({Drinks} \Rightarrow {Party})] \Rightarrow [({Food} \land {Drinks}) \Rightarrow {Party}].$$
1. Determine, using enumeration, whether this sentence is valid, satisfiable (but not valid), or unsatisfiable.
2. Convert the left-hand and right-hand sides of the main implication into CNF, showing each step, and explain how the results confirm your answer to (a).
3. Prove your answer to (a) using resolution.

Exercise 24 (dnf-exercise)

A sentence is in disjunctive normal form (DNF) if it is the disjunction of conjunctions of literals. For example, the sentence $(A \land B \land \lnot C) \lor (\lnot A \land C) \lor (B \land \lnot C)$ is in DNF.
1. Any propositional logic sentence is logically equivalent to the assertion that some possible world in which it would be true is in fact the case. From this observation, prove that any sentence can be written in DNF.
2. Construct an algorithm that converts any sentence in propositional logic into DNF. (Hint: The algorithm is similar to the algorithm for conversion to CNF given in Section pl-resolution-section.)
3.
Construct a simple algorithm that takes as input a sentence in DNF and returns a satisfying assignment if one exists, or reports that no satisfying assignment exists.
4. Apply the algorithms in (b) and (c) to the following set of sentences: $A \Rightarrow B$, $B \Rightarrow C$, $C \Rightarrow A$.
5. Since the algorithm in (b) is very similar to the algorithm for conversion to CNF, and since the algorithm in (c) is much simpler than any algorithm for solving a set of sentences in CNF, why is this technique not used in automated reasoning?

Exercise 25 (convert-clausal-exercise)

Convert the following set of sentences to clausal form.
1. S1: $A \Leftrightarrow (B \lor E)$.
2. S2: $E \Rightarrow D$.
3. S3: $C \land F \Rightarrow \lnot B$.
4. S4: $E \Rightarrow B$.
5. S5: $B \Rightarrow F$.
6. S6: $B \Rightarrow C$.

Give a trace of the execution of DPLL on the conjunction of these clauses.

Exercise 26 (convert-clausal-exercise)

Convert the following set of sentences to clausal form.
1. S1: $A \Leftrightarrow (B \lor E)$.
2. S2: $E \Rightarrow D$.
3. S3: $C \land F \Rightarrow \lnot B$.
4. S4: $E \Rightarrow B$.
5. S5: $B \Rightarrow F$.
6. S6: $B \Rightarrow C$.

Give a trace of the execution of DPLL on the conjunction of these clauses.

Exercise 27

Is a randomly generated 4-CNF sentence with $n$ symbols and $m$ clauses more or less likely to be solvable than a randomly generated 3-CNF sentence with $n$ symbols and $m$ clauses? Explain.

Exercise 28

Minesweeper, the well-known computer game, is closely related to the wumpus world. A minesweeper world is a rectangular grid of $N$ squares with $M$ invisible mines scattered among them. Any square may be probed by the agent; instant death follows if a mine is probed. Minesweeper indicates the presence of mines by revealing, in each probed square, the number of mines that are directly or diagonally adjacent. The goal is to probe every unmined square.
1.
Let $X_{i,j}$ be true iff square $[i,j]$ contains a mine. Write down the assertion that exactly two mines are adjacent to $[1,1]$ as a sentence involving some logical combination of $X_{i,j}$ propositions.
2. Generalize your assertion from (a) by explaining how to construct a CNF sentence asserting that $k$ of $n$ neighbors contain mines.
3. Explain precisely how an agent can use DPLL to prove that a given square does (or does not) contain a mine, ignoring the global constraint that there are exactly $M$ mines in all.
4. Suppose that the global constraint is constructed from your method from part (b). How does the number of clauses depend on $M$ and $N$? Suggest a way to modify DPLL so that the global constraint does not need to be represented explicitly.
5. Are any conclusions derived by the method in part (c) invalidated when the global constraint is taken into account?
6. Give examples of configurations of probe values that induce long-range dependencies such that the contents of a given unprobed square would give information about the contents of a far-distant square. (Hint: consider an $N \times 1$ board.)

Exercise 29 (known-literal-exercise)

How long does it take to prove ${KB} \models \alpha$ using DPLL when $\alpha$ is a literal already contained in ${KB}$? Explain.

Exercise 30 (dpll-fc-exercise)

Trace the behavior of DPLL on the knowledge base in Figure pl-horn-example-figure when trying to prove $Q$, and compare this behavior with that of the forward-chaining algorithm.

Exercise 31

Write a successor-state axiom for the ${Locked}$ predicate, which applies to doors, assuming the only actions available are ${Lock}$ and ${Unlock}$.

Exercise 32

Discuss what is meant by optimal behavior in the wumpus world. Show that the Hybrid-Wumpus-Agent is not optimal, and suggest ways to improve it.

Exercise 33

Suppose an agent inhabits a world with two states, $S$ and $\lnot S$, and can do exactly one of two actions, $a$ and $b$.
Action $a$ does nothing and action $b$ flips from one state to the other. Let $S^t$ be the proposition that the agent is in state $S$ at time $t$, and let $a^t$ be the proposition that the agent does action $a$ at time $t$ (similarly for $b^t$).
1. Write a successor-state axiom for $S^{t+1}$.
2. Convert the sentence in (a) into CNF.
3. Show a resolution refutation proof that if the agent is in $\lnot S$ at time $t$ and does $a$, it will still be in $\lnot S$ at time $t+1$.

Exercise 34 (ss-axiom-exercise)

Section successor-state-section provides some of the successor-state axioms required for the wumpus world. Write down axioms for all remaining fluent symbols.

Exercise 35 (hybrid-wumpus-exercise)

Modify the Hybrid-Wumpus-Agent to use the 1-CNF logical state estimation method described on page 1cnf-belief-state-page. We noted on that page that such an agent will not be able to acquire, maintain, and use more complex beliefs such as the disjunction $P_{3,1} \lor P_{2,2}$. Suggest a method for overcoming this problem by defining additional proposition symbols, and try it out in the wumpus world. Does it improve the performance of the agent?

Exercise 1

A logical knowledge base represents the world using a set of sentences with no explicit structure. An analogical representation, on the other hand, has physical structure that corresponds directly to the structure of the thing represented. Consider a road map of your country as an analogical representation of facts about the country: it represents facts with a map language. The two-dimensional structure of the map corresponds to the two-dimensional surface of the area.
1. Give five examples of symbols in the map language.
2. An explicit sentence is a sentence that the creator of the representation actually writes down. An implicit sentence is a sentence that results from explicit sentences because of properties of the analogical representation. Give three examples each of implicit and explicit sentences in the map language.
3.
Give three examples of facts about the physical structure of your country that cannot be represented in the map language.
4. Give two examples of facts that are much easier to express in the map language than in first-order logic.
5. Give two other examples of useful analogical representations. What are the advantages and disadvantages of each of these languages?

Exercise 2

Consider a knowledge base containing just two sentences: $P(a)$ and $P(b)$. Does this knowledge base entail $\forall x\; P(x)$? Explain your answer in terms of models.

Exercise 3

Is the sentence $\exists x, y\; x = y$ valid? Explain.

Exercise 4

Write down a logical sentence such that every world in which it is true contains exactly one object.

Exercise 5 (two-friends-exercise)

Write down a logical sentence such that every world in which it is true contains exactly two objects.

Exercise 6 (8puzzle-parity-exercise)

Consider a symbol vocabulary that contains $c$ constant symbols, $p_k$ predicate symbols of each arity $k$, and $f_k$ function symbols of each arity $k$, where $1 \leq k \leq A$. Let the domain size be fixed at $D$. For any given model, each predicate or function symbol is mapped onto a relation or function, respectively, of the same arity. You may assume that the functions in the model allow some input tuples to have no value for the function (i.e., the value is the invisible object). Derive a formula for the number of possible models for a domain with $D$ elements. Don't worry about eliminating redundant combinations.

Exercise 7 (nqueens-size-exercise)

Which of the following are valid (necessarily true) sentences?
1. $(\exists x\; x = x) \Rightarrow (\forall y\; \exists z\; y = z)$.
2. $\forall x\; P(x) \lor \lnot P(x)$.
3. $\forall x\; {Smart}(x) \lor (x = x)$.

Exercise 8 (empty-universe-exercise)

Consider a version of the semantics for first-order logic in which models with empty domains are allowed.
Give at least two examples of sentences that are valid according to the standard semantics but not according to the new semantics. Discuss which outcome makes more intuitive sense for your examples.

Exercise 9 (hillary-exercise)

Does the fact $\lnot {Spouse}({George},{Laura})$ follow from the facts ${Jim} \neq {George}$ and ${Spouse}({Jim},{Laura})$? If so, give a proof; if not, supply additional axioms as needed. What happens if we use ${Spouse}$ as a unary function symbol instead of a binary predicate?

Exercise 10

This exercise uses the function ${MapColor}$ and predicates ${In}(x,y)$, ${Borders}(x,y)$, and ${Country}(x)$, whose arguments are geographical regions, along with constant symbols for various regions. In each of the following we give an English sentence and a number of candidate logical expressions. For each of the logical expressions, state whether it (1) correctly expresses the English sentence; (2) is syntactically invalid and therefore meaningless; or (3) is syntactically valid but does not express the meaning of the English sentence.
1. Paris and Marseilles are both in France.
   1. ${In}({Paris} \land {Marseilles}, {France})$.
   2. ${In}({Paris},{France}) \land {In}({Marseilles},{France})$.
   3. ${In}({Paris},{France}) \lor {In}({Marseilles},{France})$.
2. There is a country that borders both Iraq and Pakistan.
   1. $\exists c\; {Country}(c) \land {Border}(c,{Iraq}) \land {Border}(c,{Pakistan})$.
   2. $\exists c\; {Country}(c) \Rightarrow [{Border}(c,{Iraq}) \land {Border}(c,{Pakistan})]$.
   3. $[\exists c\; {Country}(c)] \Rightarrow [{Border}(c,{Iraq}) \land {Border}(c,{Pakistan})]$.
   4. $\exists c\; {Border}({Country}(c),{Iraq} \land {Pakistan})$.
3. All countries that border Ecuador are in South America.
   1. $\forall c\; Country(c) \land {Border}(c,{Ecuador}) \Rightarrow {In}(c,{SouthAmerica})$.
   2. $\forall c\; {Country}(c) \Rightarrow [{Border}(c,{Ecuador}) \Rightarrow {In}(c,{SouthAmerica})]$.
   3.
$\forall c\; [{Country}(c) \Rightarrow {Border}(c,{Ecuador})] \Rightarrow {In}(c,{SouthAmerica})$.
   4. $\forall c\; Country(c) \land {Border}(c,{Ecuador}) \land {In}(c,{SouthAmerica})$.
4. No region in South America borders any region in Europe.
   1. $\lnot [\exists c, d\; {In}(c,{SouthAmerica}) \land {In}(d,{Europe}) \land {Borders}(c,d)]$.
   2. $\forall c, d\; [{In}(c,{SouthAmerica}) \land {In}(d,{Europe})] \Rightarrow \lnot {Borders}(c,d)$.
   3. $\lnot \forall c\; {In}(c,{SouthAmerica}) \Rightarrow \exists d\; {In}(d,{Europe}) \land \lnot {Borders}(c,d)$.
   4. $\forall c\; {In}(c,{SouthAmerica}) \Rightarrow \forall d\; {In}(d,{Europe}) \Rightarrow \lnot {Borders}(c,d)$.
5. No two adjacent countries have the same map color.
   1. $\forall x, y\; \lnot {Country}(x) \lor \lnot {Country}(y) \lor \lnot {Borders}(x,y) \lor \lnot ({MapColor}(x) = {MapColor}(y))$.
   2. $\forall x, y\; ({Country}(x) \land {Country}(y) \land {Borders}(x,y) \land \lnot (x=y)) \Rightarrow \lnot ({MapColor}(x) = {MapColor}(y))$.
   3. $\forall x, y\; {Country}(x) \land {Country}(y) \land {Borders}(x,y) \land \lnot ({MapColor}(x) = {MapColor}(y))$.
   4. $\forall x, y\; ({Country}(x) \land {Country}(y) \land {Borders}(x,y)) \Rightarrow {MapColor}(x \neq y)$.

Exercise 11

Consider a vocabulary with the following symbols:

> ${Occupation}(p,o)$: Predicate. Person $p$ has occupation $o$.
> ${Customer}(p_1,p_2)$: Predicate. Person $p_1$ is a customer of person $p_2$.
> ${Boss}(p_1,p_2)$: Predicate. Person $p_1$ is a boss of person $p_2$.
> ${Doctor}$, ${Surgeon}$, ${Lawyer}$, ${Actor}$: Constants denoting occupations.
> ${Emily}$, ${Joe}$: Constants denoting people.

Use these symbols to write the following assertions in first-order logic:
1. Emily is either a surgeon or a lawyer.
2. Joe is an actor, but he also holds another job.
3. All surgeons are doctors.
4. Joe does not have a lawyer (i.e., is not a customer of any lawyer).
5. Emily has a boss who is a lawyer.
6.
There exists a lawyer all of whose customers are doctors.
7. Every surgeon has a lawyer.

Exercise 12

In each of the following we give an English sentence and a number of candidate logical expressions. For each of the logical expressions, state whether it (1) correctly expresses the English sentence; (2) is syntactically invalid and therefore meaningless; or (3) is syntactically valid but does not express the meaning of the English sentence.
1. Every cat loves its mother or father.
   1. $\forall x\; {Cat}(x) \Rightarrow {Loves}(x,{Mother}(x) \lor {Father}(x))$.
   2. $\forall x\; \lnot {Cat}(x) \lor {Loves}(x,{Mother}(x)) \lor {Loves}(x,{Father}(x))$.
   3. $\forall x\; {Cat}(x) \land ({Loves}(x,{Mother}(x)) \lor {Loves}(x,{Father}(x)))$.
2. Every dog who loves one of its brothers is happy.
   1. $\forall x\; {Dog}(x) \land (\exists y\; {Brother}(y,x) \land {Loves}(x,y)) \Rightarrow {Happy}(x)$.
   2. $\forall x, y\; {Dog}(x) \land {Brother}(y,x) \land {Loves}(x,y) \Rightarrow {Happy}(x)$.
   3. $\forall x\; {Dog}(x) \land [\forall y\; {Brother}(y,x) \Leftrightarrow {Loves}(x,y)] \Rightarrow {Happy}(x)$.
3. No dog bites a child of its owner.
   1. $\forall x\; {Dog}(x) \Rightarrow \lnot {Bites}(x,{Child}({Owner}(x)))$.
   2. $\lnot \exists x, y\; {Dog}(x) \land {Child}(y,{Owner}(x)) \land {Bites}(x,y)$.
   3. $\forall x\; {Dog}(x) \Rightarrow (\forall y\; {Child}(y,{Owner}(x)) \Rightarrow \lnot {Bites}(x,y))$.
   4. $\lnot \exists x\; {Dog}(x) \Rightarrow (\exists y\; {Child}(y,{Owner}(x)) \land {Bites}(x,y))$.
4. Everyone's zip code within a state has the same first digit.
   1. $\forall x, s, z_1\; [{State}(s) \land {LivesIn}(x,s) \land {Zip}(x) = z_1] \Rightarrow [\forall y, z_2\; {LivesIn}(y,s) \land {Zip}(y) = z_2 \Rightarrow {Digit}(1,z_1) = {Digit}(1,z_2)]$.
   2. $\forall x, s\; [{State}(s) \land {LivesIn}(x,s) \land \exists z_1\; {Zip}(x) = z_1] \Rightarrow [\forall y, z_2\; {LivesIn}(y,s) \land {Zip}(y) = z_2 \land {Digit}(1,z_1) = {Digit}(1,z_2)]$.
   3. $\forall x, y, s\; {State}(s) \land {LivesIn}(x,s) \land {LivesIn}(y,s) \Rightarrow {Digit}(1,{Zip}(x) = {Zip}(y))$.
   4. $\forall x, y, s\; {State}(s) \land {LivesIn}(x,s) \land {LivesIn}(y,s) \Rightarrow {Digit}(1,{Zip}(x)) = {Digit}(1,{Zip}(y))$.

Exercise 13 (language-determination-exercise)

Complete the following exercises about logical sentences:
1. Translate into *good, natural* English (no $x$s or $y$s!): $$\forall x, y, l\; SpeaksLanguage(x, l) \land SpeaksLanguage(y, l) \Rightarrow Understands(x, y) \land Understands(y, x).$$
2. Explain why this sentence is entailed by the sentence $$\forall x, y, l\; SpeaksLanguage(x, l) \land SpeaksLanguage(y, l) \Rightarrow Understands(x, y).$$
3. Translate into first-order logic the following sentences:
   1. Understanding leads to friendship.
   2. Friendship is transitive.
   Remember to define all predicates, functions, and constants you use.

Exercise 14

True or false? Explain.
1. $\exists x\; x = {Rumpelstiltskin}$ is a valid (necessarily true) sentence of first-order logic.
2. Every existentially quantified sentence in first-order logic is true in any model that contains exactly one object.
3. $\forall x, y\; x = y$ is satisfiable.

Exercise 15 (Peano-completion-exercise)

Rewrite the first two Peano axioms in Section Peano-section as a single axiom that defines ${NatNum}(x)$ so as to exclude the possibility of natural numbers except for those generated by the successor function.

Exercise 16 (wumpus-diagnostic-exercise)

Equation (pit-biconditional-equation) on page pit-biconditional-equation defines the conditions under which a square is breezy. Here we consider two other ways to describe this aspect of the wumpus world.
1. We can write *diagnostic rules* leading from observed effects to hidden causes. For finding pits, the obvious diagnostic rules say that if a square is breezy, some adjacent square must contain a pit; and if a square is not breezy, then no adjacent square contains a pit.
Write these two rules in first-order logic and show that their conjunction is logically equivalent to Equation (pit-biconditional-equation).
2. We can write *causal rules* leading from cause to effect. One obvious causal rule is that a pit causes all adjacent squares to be breezy. Write this rule in first-order logic, explain why it is incomplete compared to Equation (pit-biconditional-equation), and supply the missing axiom.

Exercise 17 (kinship-exercise)

Write axioms describing the predicates ${Grandchild}$, ${Greatgrandparent}$, ${Ancestor}$, ${Brother}$, ${Sister}$, ${Daughter}$, ${Son}$, ${FirstCousin}$, ${BrotherInLaw}$, ${SisterInLaw}$, ${Aunt}$, and ${Uncle}$. Find out the proper definition of $m$th cousin $n$ times removed, and write the definition in first-order logic. Now write down the basic facts depicted in the family tree in Figure family1-figure. Using a suitable logical reasoning system, Tell it all the sentences you have written down, and Ask it who are Elizabeth's grandchildren, Diana's brothers-in-law, Zara's great-grandparents, and Eugenie's ancestors.

A typical family tree. The symbol $\bowtie$ connects spouses and arrows point to children.

Exercise 18

Write down a sentence asserting that $+$ is a commutative function. Does your sentence follow from the Peano axioms? If so, explain why; if not, give a model in which the axioms are true and your sentence is false.

Exercise 19

Explain what is wrong with the following proposed definition of the set membership predicate $\in$: $$\forall x, s\; x \in \{x|s\}$$ $$\forall x, s\; x \in s \Rightarrow \forall y\; x \in \{y|s\}$$

Exercise 20 (list-representation-exercise)

Using the set axioms as examples, write axioms for the list domain, including all the constants, functions, and predicates mentioned in the chapter.
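The Tell/Ask workflow of Exercise 17 can be tried out on a small scale even without a full reasoning system. Below is a minimal Python sketch under stated assumptions: the `parent_facts` set is a made-up fragment, not the actual tree in Figure family1-figure, and `grandchildren` stands in for an Ask over a derived ${Grandchild}$ predicate.

```python
# Minimal sketch of the Tell/Ask workflow in Exercise 17, using a tiny
# hypothetical family tree (NOT the tree shown in Figure family1-figure).
# Each fact Parent(p, c) is stored as a (p, c) tuple; Grandchild is derived.

parent_facts = {            # Parent(p, c): p is a parent of c (hypothetical data)
    ("Elizabeth", "Charles"),
    ("Elizabeth", "Anne"),
    ("Charles", "William"),
    ("Anne", "Zara"),
}

def grandchildren(person, parents):
    """Ask: all g such that Parent(person, c) and Parent(c, g) for some c."""
    return {g for (p, c) in parents if p == person
              for (c2, g) in parents if c2 == c}

print(sorted(grandchildren("Elizabeth", parent_facts)))  # ['William', 'Zara']
```

A real logical reasoning system would instead derive ${Grandchild}(g, p)$ from the kinship axioms; the comprehension above simply hard-codes that one derivation.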
Exercise 21 (adjacency-exercise)

Explain what is wrong with the following proposed definition of adjacent squares in the wumpus world: $$\forall x, y\; {Adjacent}([x,y], [x+1, y]) \land {Adjacent}([x,y], [x, y+1]).$$

Exercise 22

Write out the axioms required for reasoning about the wumpus's location, using a constant symbol ${Wumpus}$ and a binary predicate ${At}({Wumpus}, {Location})$. Remember that there is only one wumpus.

Exercise 23

Assuming predicates ${Parent}(p,q)$ and ${Female}(p)$ and constants ${Joan}$ and ${Kevin}$, with the obvious meanings, express each of the following sentences in first-order logic. (You may use the abbreviation $\exists^{1}$ to mean "there exists exactly one.")
1. Joan has a daughter (possibly more than one, and possibly sons as well).
2. Joan has exactly one daughter (but may have sons as well).
3. Joan has exactly one child, a daughter.
4. Joan and Kevin have exactly one child together.
5. Joan has at least one child with Kevin, and no children with anyone else.

Exercise 24

Arithmetic assertions can be written in first-order logic with the predicate symbol $<$, the function symbols $+$ and $\times$, and the constant symbols 0 and 1. Additional predicates can also be defined with biconditionals.
1. Represent the property "$x$ is an even number."
2. Represent the property "$x$ is prime."
3. Goldbach's conjecture is the conjecture (unproven as yet) that every even number is equal to the sum of two primes. Represent this conjecture as a logical sentence.

Exercise 25

In Chapter csp-chapter, we used equality to indicate the relation between a variable and its value. For instance, we wrote ${WA} = {red}$ to mean that Western Australia is colored red. Representing this in first-order logic, we must write more verbosely ${ColorOf}({WA}) = {red}$. What incorrect inference could be drawn if we wrote sentences such as ${WA} = {red}$ directly as logical assertions?
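The properties asked for in Exercise 24 translate almost directly into checkable predicates. The Python sketch below mirrors the intended biconditionals for "even" and "prime" and the Goldbach property; it only spot-checks small cases and of course proves nothing about the conjecture.

```python
# Exercise 24's definitions as executable predicates over the natural numbers.
# even(x): there exists y with x = y + y.
# prime(x): x > 1 and x has no factorization a*b with a, b >= 2.

def even(x):
    return any(x == y + y for y in range(x + 1))

def prime(x):
    return x > 1 and not any(x == a * b
                             for a in range(2, x) for b in range(2, x))

def goldbach_holds(x):
    """The conjecture's claim for a given even x: x is a sum of two primes."""
    return any(prime(p) and prime(x - p) for p in range(2, x - 1))

# Spot-check the conjecture for the even numbers 4..98.
print(all(goldbach_holds(x) for x in range(4, 100, 2)))  # True
```

The quantifiers in the logical sentences become `any`/`all` over bounded ranges here, which is exactly why this is a finite check rather than a proof.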
Exercise 26

Write in first-order logic the assertion that every key and at least one of every pair of socks will eventually be lost forever, using only the following vocabulary: ${Key}(x)$, $x$ is a key; ${Sock}(x)$, $x$ is a sock; ${Pair}(x,y)$, $x$ and $y$ are a pair; ${Now}$, the current time; ${Before}(t_1,t_2)$, time $t_1$ comes before time $t_2$; ${Lost}(x,t)$, object $x$ is lost at time $t$.

Exercise 27

For each of the following sentences in English, decide if the accompanying first-order logic sentence is a good translation. If not, explain why not and correct it. (Some sentences may have more than one error!)
1. No two people have the same social security number. $$\lnot \exists x, y, n\; {Person}(x) \land {Person}(y) \Rightarrow [{HasSS}\#(x,n) \land {HasSS}\#(y,n)].$$
2. John's social security number is the same as Mary's. $$\exists n\; {HasSS}\#({John},n) \land {HasSS}\#({Mary},n).$$
3. Everyone's social security number has nine digits. $$\forall x, n\; {Person}(x) \Rightarrow [{HasSS}\#(x,n) \land {Digits}(n,9)].$$
4. Rewrite each of the above (uncorrected) sentences using a function symbol ${SS}\#$ instead of the predicate ${HasSS}\#$.

Exercise 28

Translate into first-order logic the sentence "Everyone's DNA is unique and is derived from their parents' DNA." You must specify the precise intended meaning of your vocabulary terms. (*Hint*: Do not use the predicate ${Unique}(x)$, since uniqueness is not really a property of an object in itself!)

Exercise 29

For each of the following sentences in English, decide if the accompanying first-order logic sentence is a good translation. If not, explain why not and correct it.
1. Any apartment in London has lower rent than some apartments in Paris. $$\forall x\; [{Apt}(x) \land {In}(x,{London})] \Rightarrow \exists y\; ([{Apt}(y) \land {In}(y,{Paris})] \Rightarrow ({Rent}(x) < {Rent}(y)))$$
2.
There is exactly one apartment in Paris with rent below \$1000. $$\exists x\; {Apt}(x) \land {In}(x,{Paris}) \land \forall y\; [{Apt}(y) \land {In}(y,{Paris}) \land ({Rent}(y) < {Dollars}(1000))] \Rightarrow (y = x)$$
3. If an apartment is more expensive than all apartments in London, it must be in Moscow. $$\forall x\; {Apt}(x) \land [\forall y\; {Apt}(y) \land {In}(y,{London}) \land ({Rent}(x) > {Rent}(y))] \Rightarrow {In}(x,{Moscow}).$$

Exercise 30

Represent the following sentences in first-order logic, using a consistent vocabulary (which you must define):
1. Some students took French in spring 2001.
2. Every student who takes French passes it.
3. Only one student took Greek in spring 2001.
4. The best score in Greek is always higher than the best score in French.
5. Every person who buys a policy is smart.
6. No person buys an expensive policy.
7. There is an agent who sells policies only to people who are not insured.
8. There is a barber who shaves all men in town who do not shave themselves.
9. A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK citizen by birth.
10. A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by descent.
11. Politicians can fool some of the people all of the time, and they can fool all of the people some of the time, but they can't fool all of the people all of the time.
12. All Greeks speak the same language. (Use ${Speaks}(x,l)$ to mean that person $x$ speaks language $l$.)

Exercise 31

Represent the following sentences in first-order logic, using a consistent vocabulary (which you must define):
1. Some students took French in spring 2001.
2. Every student who takes French passes it.
3. Only one student took Greek in spring 2001.
4. The best score in Greek is always higher than the best score in French.
5. Every person who buys a policy is smart.
6. No person buys an expensive policy.
7. There is an agent who sells policies only to people who are not insured.
8.
There is a barber who shaves all men in town who do not shave themselves.
9. A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK citizen by birth.
10. A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by descent.
11. Politicians can fool some of the people all of the time, and they can fool all of the people some of the time, but they can't fool all of the people all of the time.
12. All Greeks speak the same language. (Use ${Speaks}(x,l)$ to mean that person $x$ speaks language $l$.)

Exercise 32

Write a general set of facts and axioms to represent the assertion "Wellington heard about Napoleon's death" and to correctly answer the question "Did Napoleon hear about Wellington's death?"

Exercise 33 (4bit-adder-exercise)

Extend the vocabulary from Section circuits-section to define addition for $n$-bit binary numbers. Then encode the description of the four-bit adder in Figure 4bit-adder-figure, and pose the queries needed to verify that it is in fact correct.

A four-bit adder. Each ${Ad}_i$ is a one-bit adder, as in Figure adder-figure on page adder-figure.

Exercise 34

The circuit representation in the chapter is more detailed than necessary if we care only about circuit functionality. A simpler formulation describes any $m$-input, $n$-output gate or circuit using a predicate with $m+n$ arguments, such that the predicate is true exactly when the inputs and outputs are consistent. For example, NOT gates are described by the binary predicate ${NOT}(i,o)$, for which ${NOT}(0,1)$ and ${NOT}(1,0)$ are known. Compositions of gates are defined by conjunctions of gate predicates in which shared variables indicate direct connections.
For example, a NAND circuit can be composed from ${AND}$s and ${NOT}$s: $$\forall i_1, i_2, o_a, o\; {AND}(i_1,i_2,o_a) \land {NOT}(o_a,o) \Rightarrow {NAND}(i_1,i_2,o).$$ Using this representation, define the one-bit adder in Figure adder-figure and the four-bit adder in Figure 4bit-adder-figure, and explain what queries you would use to verify the designs. What kinds of queries are *not* supported by this representation that *are* supported by the representation in Section circuits-section?

Exercise 35

Obtain a passport application for your country, identify the rules determining eligibility for a passport, and translate them into first-order logic, following the steps outlined in Section circuits-section.

Exercise 36

Consider a first-order logical knowledge base that describes worlds containing people, songs, albums (e.g., "Meet the Beatles") and disks (i.e., particular physical instances of CDs). The vocabulary contains the following symbols:

> ${CopyOf}(d,a)$: Predicate. Disk $d$ is a copy of album $a$.
> ${Owns}(p,d)$: Predicate. Person $p$ owns disk $d$.
> ${Sings}(p,s,a)$: Album $a$ includes a recording of song $s$ sung by person $p$.
> ${Wrote}(p,s)$: Person $p$ wrote song $s$.
> ${McCartney}$, ${Gershwin}$, ${BHoliday}$, ${Joe}$, ${EleanorRigby}$, ${TheManILove}$, ${Revolver}$: Constants with the obvious meanings.

Express the following statements in first-order logic:
1. Gershwin wrote "The Man I Love."
2. Gershwin did not write "Eleanor Rigby."
3. Either Gershwin or McCartney wrote "The Man I Love."
4. Joe has written at least one song.
5. Joe owns a copy of *Revolver*.
6. Every song that McCartney sings on *Revolver* was written by McCartney.
7. Gershwin did not write any of the songs on *Revolver*.
8. Every song that Gershwin wrote has been recorded on some album. (Possibly different songs are recorded on different albums.)
9. There is a single album that contains every song that Joe has written.
10.
Joe owns a copy of an album that has Billie Holiday singing "The Man I Love."
11. Joe owns a copy of every album that has a song sung by McCartney. (Of course, each different album is instantiated in a different physical CD.)
12. Joe owns a copy of every album on which all the songs are sung by Billie Holiday.

Exercise 1

Prove that Universal Instantiation is sound and that Existential Instantiation produces an inferentially equivalent knowledge base.

Exercise 2

From ${Likes}({Jerry},{IceCream})$ it seems reasonable to infer $\exists x\; {Likes}(x,{IceCream})$. Write down a general inference rule that sanctions this inference. State carefully the conditions that must be satisfied by the variables and terms involved.

Exercise 3

Suppose a knowledge base contains just one sentence, $\exists x\; {AsHighAs}(x,{Everest})$. Which of the following are legitimate results of applying Existential Instantiation?
1. ${AsHighAs}({Everest},{Everest})$.
2. ${AsHighAs}({Kilimanjaro},{Everest})$.
3. ${AsHighAs}({Kilimanjaro},{Everest}) \land {AsHighAs}({BenNevis},{Everest})$ (after two applications).

Exercise 4

For each pair of atomic sentences, give the most general unifier if it exists:
1. $P(A,B,B)$, $P(x,y,z)$.
2. $Q(y,G(A,B))$, $Q(G(x,x),y)$.
3. ${Older}({Father}(y),y)$, ${Older}({Father}(x),{John})$.
4. ${Knows}({Father}(y),y)$, ${Knows}(x,x)$.

Exercise 5

For each pair of atomic sentences, give the most general unifier if it exists:
1. $P(A,B,B)$, $P(x,y,z)$.
2. $Q(y,G(A,B))$, $Q(G(x,x),y)$.
3. ${Older}({Father}(y),y)$, ${Older}({Father}(x),{John})$.
4. ${Knows}({Father}(y),y)$, ${Knows}(x,x)$.

Exercise 6 (subsumption-lattice-exercise)

Consider the subsumption lattices shown in Figure subsumption-lattice-figure (page subsumption-lattice-figure).
1. Construct the lattice for the sentence ${Employs}({Mother}({John}),{Father}({Richard}))$.
2. Construct the lattice for the sentence ${Employs}({IBM},y)$ ("Everyone works for IBM"). Remember to include every kind of query that unifies with the sentence.
3.
Assume that Store indexes each sentence under every node in its subsumption lattice. Explain how Fetch should work when some of these sentences contain variables; use as examples the sentences in (a) and (b) and the query ${Employs}(x,{Father}(x))$.

Exercise 7 (fol-horses-exercise)

Write down logical representations for the following sentences, suitable for use with Generalized Modus Ponens:
1. Horses, cows, and pigs are mammals.
2. An offspring of a horse is a horse.
3. Bluebeard is a horse.
4. Bluebeard is Charlie's parent.
5. Offspring and parent are inverse relations.
6. Every mammal has a parent.

Exercise 8

These questions concern issues with substitution and Skolemization.
1. Given the premise $\forall x\; \exists y\; P(x,y)$, it is not valid to conclude that $\exists q\; P(q,q)$. Give an example of a predicate $P$ where the first is true but the second is false.
2. Suppose that an inference engine is incorrectly written with the occurs check omitted, so that it allows a literal like $P(x,F(x))$ to be unified with $P(q,q)$. (As mentioned, most standard implementations of Prolog actually do allow this.) Show that such an inference engine will allow the conclusion $\exists y\; P(q,q)$ to be inferred from the premise $\forall x\; \exists y\; P(x,y)$.
3. Suppose that a procedure that converts first-order logic to clausal form incorrectly Skolemizes $\forall x\; \exists y\; P(x,y)$ to $P(x,Sk0)$; that is, it replaces $y$ by a Skolem constant rather than by a Skolem function of $x$. Show that an inference engine that uses such a procedure will likewise allow $\exists q\; P(q,q)$ to be inferred from the premise $\forall x\; \exists y\; P(x,y)$.
4. A common error among students is to suppose that, in unification, one is allowed to substitute a term for a Skolem constant instead of for a variable. For instance, they will say that the formulas $P(Sk1)$ and $P(A)$ can be unified under the substitution $\{Sk1/A\}$. Give an example where this leads to an invalid inference.
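The occurs check at issue in Exercise 8 is easy to see in a toy unifier. The Python sketch below uses our own illustrative encoding (variables are lowercase strings, compound terms are tuples such as `("F", "x")`); it is not a production implementation. Unifying $P(x,F(x))$ with $P(q,q)$ fails precisely because the check rejects binding a variable to a term containing that variable.

```python
# A bare-bones unifier with an occurs check, illustrating Exercise 8.
# Encoding (ours): variables are lowercase strings; compound terms are
# tuples whose first element is the functor/predicate name.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def walk(t, s):
    """Follow variable bindings in substitution s until a non-bound term."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Does variable v occur anywhere inside term t under substitution s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# P(x, F(x)) vs P(q, q): the occurs check makes unification fail, as it should.
print(unify(("P", "x", ("F", "x")), ("P", "q", "q")))  # None
```

Dropping the `occurs` call is exactly the bug the exercise describes: the unifier would then bind $x$ to $q$ and $q$ to $F(q)$, licensing the invalid conclusion $\exists q\; P(q,q)$.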
Exercise 9

This question considers Horn KBs, such as the following:
$$\begin{array}{l}
P(F(x)) \Rightarrow P(x)\\
Q(x) \Rightarrow P(F(x))\\
P(A)\\
Q(B)
\end{array}$$
Let FC be a breadth-first forward-chaining algorithm that repeatedly adds all consequences of currently satisfied rules; let BC be a depth-first left-to-right backward-chaining algorithm that tries clauses in the order given in the KB. Which of the following are true?
1. FC will infer the literal $Q(A)$.
2. FC will infer the literal $P(B)$.
3. If FC has failed to infer a given literal, then it is not entailed by the KB.
4. BC will return ${true}$ given the query $P(B)$.
5. If BC does not return ${true}$ given a query literal, then it is not entailed by the KB.

Exercise 10 (csp-clause-exercise)

Explain how to write any given 3-SAT problem of arbitrary size using a single first-order definite clause and no more than 30 ground facts.

Exercise 11

Suppose you are given the following axioms:
1. $0 \leq 3$.
2. $7 \leq 9$.
3. $\forall\,x\;\; x \leq x$.
4. $\forall\,x\;\; x \leq x+0$.
5. $\forall\,x\;\; x+0 \leq x$.
6. $\forall\,x,y\;\; x+y \leq y+x$.
7. $\forall\,w,x,y,z\;\; w \leq y \wedge x \leq z \;\Rightarrow\; w+x \leq y+z$.
8. $\forall\,x,y,z\;\; x \leq y \wedge y \leq z \;\Rightarrow\; x \leq z$.

1. Give a backward-chaining proof of the sentence $7 \leq 3+9$. (Be sure, of course, to use only the axioms given here, not anything else you may know about arithmetic.) Show only the steps that lead to success, not the irrelevant steps.
2. Give a forward-chaining proof of the sentence $7 \leq 3+9$. Again, show only the steps that lead to success.

Exercise 12

Suppose you are given the following axioms:
> 1. $0 \leq 4$.
> 2. $5 \leq 9$.
> 3. $\forall\,x\;\; x \leq x$.
> 4. $\forall\,x\;\; x \leq x+0$.
> 5. $\forall\,x\;\; x+0 \leq x$.
> 6. $\forall\,x,y\;\; x+y \leq y+x$.
> 7. $\forall\,w,x,y,z\;\; w \leq y \wedge x \leq z \;\Rightarrow\; w+x \leq y+z$.
> 8. 
$\forall\,x,y,z\;\; x \leq y \wedge y \leq z \;\Rightarrow\; x \leq z$.

1. Give a backward-chaining proof of the sentence $5 \leq 4+9$. (Be sure, of course, to use only the axioms given here, not anything else you may know about arithmetic.) Show only the steps that lead to success, not the irrelevant steps.
2. Give a forward-chaining proof of the sentence $5 \leq 4+9$. Again, show only the steps that lead to success.

Exercise 13

A popular children’s riddle is “Brothers and sisters have I none, but that man’s father is my father’s son.” Use the rules of the family domain (Section kinship-domain-section on page kinship-domain-section) to show who that man is. You may apply any of the inference methods described in this chapter. Why do you think that this riddle is difficult?

Exercise 14

Suppose we put into a logical knowledge base a segment of the U.S. census data listing the age, city of residence, date of birth, and mother of every person, using social security numbers as identifying constants for each person. Thus, George’s age is given by ${Age}(\mbox{443-65-1282}, 56)$. Which of the following indexing schemes S1–S5 enable an efficient solution for which of the queries Q1–Q4 (assuming normal backward chaining)?
- S1: an index for each atom in each position.
- S2: an index for each first argument.
- S3: an index for each predicate atom.
- S4: an index for each combination of predicate and first argument.
- S5: an index for each combination of predicate and second argument, and an index for each first argument.

- Q1: ${Age}(\mbox{443-44-4321},x)$
- Q2: ${ResidesIn}(x,{Houston})$
- Q3: ${Mother}(x,y)$
- Q4: ${Age}(x,{34}) \land {ResidesIn}(x,{TinyTownUSA})$

Exercise 15 (standardize-failure-exercise)

One might suppose that we can avoid the problem of variable conflict in unification during backward chaining by standardizing apart all of the sentences in the knowledge base once and for all.
Show that, for some sentences, this approach cannot work. (Hint: Consider a sentence in which one part unifies with another.)

Exercise 16

In this exercise, use the sentences you wrote in Exercise fol-horses-exercise to answer a question by using a backward-chaining algorithm.
1. Draw the proof tree generated by an exhaustive backward-chaining algorithm for the query $\exists\,h\;{Horse}(h)$, where clauses are matched in the order given.
2. What do you notice about this domain?
3. How many solutions for $h$ actually follow from your sentences?
4. Can you think of a way to find all of them? (Hint: See Smith+al:1986.)

Exercise 17 (bc-trace-exercise)

Trace the execution of the backward-chaining algorithm in Figure backward-chaining-algorithm (page backward-chaining-algorithm) when it is applied to solve the crime problem (page west-problem-page). Show the sequence of values taken on by the ${goals}$ variable, and arrange them into a tree.

Exercise 18

The following Prolog code defines a predicate P. (Remember that uppercase terms are variables, not constants, in Prolog.)

    P(X,[X|Y]).
    P(X,[Y|Z]) :- P(X,Z).

1. Show proof trees and solutions for the queries P(A,[2,1,3]) and P(2,[1,A,3]).
2. What standard list operation does P represent?

Exercise 19

The following Prolog code defines a predicate P. (Remember that uppercase terms are variables, not constants, in Prolog.)

    P(X,[X|Y]).
    P(X,[Y|Z]) :- P(X,Z).

1. Show proof trees and solutions for the queries P(A,[1,2,3]) and P(2,[1,A,3]).
2. What standard list operation does P represent?

Exercise 20

This exercise looks at sorting in Prolog.
1. Write Prolog clauses that define the predicate sorted(L), which is true if and only if list L is sorted in ascending order.
2. Write a Prolog definition for the predicate perm(L,M), which is true if and only if L is a permutation of M.
3. Define sort(L,M) (M is a sorted version of L) using perm and sorted.
4. Run sort on longer and longer lists until you lose patience. What is the time complexity of your program?
5. 
Write a faster sorting algorithm, such as insertion sort or quicksort, in Prolog.

Exercise 21 (diff-simplify-exercise)

This exercise looks at the recursive application of rewrite rules, using logic programming. A rewrite rule (or demodulator, in theorem-proving terminology) is an equation with a specified direction. For example, the rewrite rule $x+0 \rightarrow x$ suggests replacing any expression that matches $x+0$ with the expression $x$. Rewrite rules are a key component of equational reasoning systems. Use the predicate rewrite(X,Y) to represent rewrite rules. For example, the earlier rewrite rule is written as rewrite(X+0,X). Some terms are primitive and cannot be further simplified; thus, we write primitive(0) to say that 0 is a primitive term.
1. Write a definition of a predicate simplify(X,Y) that is true when Y is a simplified version of X; that is, when no further rewrite rules apply to any subexpression of Y.
2. Write a collection of rules for the simplification of expressions involving arithmetic operators, and apply your simplification algorithm to some sample expressions.
3. Write a collection of rewrite rules for symbolic differentiation, and use them along with your simplification rules to differentiate and simplify expressions involving arithmetic expressions, including exponentiation.

Exercise 22

This exercise considers the implementation of search algorithms in Prolog. Suppose that successor(X,Y) is true when state Y is a successor of state X, and that goal(X) is true when X is a goal state. Write a definition for solve(X,P), which means that P is a path (list of states) beginning with X, ending in a goal state, and consisting of a sequence of legal steps as defined by successor. You will find that depth-first search is the easiest way to do this. How easy would it be to add heuristic search control?
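Exercise 22’s solve(X,P) is naturally a pair of Prolog clauses that let backtracking perform the depth-first search. As a cross-check of the idea (not a solution in the exercise’s own language), here is the same algorithm sketched in Python, with explicit recursion standing in for Prolog backtracking; the integer successor function and goal test are invented purely for illustration.

```python
def solve(x, successor, goal, visited=frozenset()):
    """Depth-first search: return a path [x, ..., g] ending in a goal, or None.

    Mirrors the Prolog definition
        solve(X, [X])   :- goal(X).
        solve(X, [X|P]) :- successor(X, Y), solve(Y, P).
    with an extra visited set so cycles cannot cause infinite descent.
    """
    if goal(x):
        return [x]
    for y in successor(x):
        if y not in visited:
            path = solve(y, successor, goal, visited | {x})
            if path is not None:
                return [x] + path
    return None

# Toy state space (my own, purely illustrative): states are integers,
# each state n has successors n+1 and 2n, and 10 is the goal state.
succ = lambda n: [n + 1, 2 * n]
print(solve(1, succ, lambda n: n == 10))  # → [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Adding heuristic control would mean ordering the successors before the loop, which is exactly the change that is awkward to express in the plain Prolog version.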
Exercise 23

Suppose a knowledge base contains just the following first-order Horn clauses:
$$Ancestor(Mother(x),x)$$
$$Ancestor(x,y) \land Ancestor(y,z) \implies Ancestor(x,z)$$
Consider a forward chaining algorithm that, on the $j$th iteration, terminates if the KB contains a sentence that unifies with the query, and otherwise adds to the KB every atomic sentence that can be inferred from the sentences already in the KB after iteration $j-1$.
1. For each of the following queries, say whether the algorithm will (1) give an answer (if so, write down that answer); or (2) terminate with no answer; or (3) never terminate.
   1. $Ancestor(Mother(y),John)$
   2. $Ancestor(Mother(Mother(y)),John)$
   3. $Ancestor(Mother(Mother(Mother(y))),Mother(y))$
   4. $Ancestor(Mother(John),Mother(Mother(John)))$
2. Can a resolution algorithm prove the sentence $\lnot Ancestor(John,John)$ from the original knowledge base? Explain how, or why not.
3. Suppose we add the assertion that $\lnot(Mother(x)=x)$ and augment the resolution algorithm with inference rules for equality. Now what is the answer to (b)?

Exercise 24

Let $\cal L$ be the first-order language with a single predicate $S(p,q)$, meaning “$p$ shaves $q$.” Assume a domain of people.
1. Consider the sentence “There exists a person $P$ who shaves everyone who does not shave themselves, and only people that do not shave themselves.” Express this in $\cal L$.
2. Convert the sentence in (a) to clausal form.
3. Construct a resolution proof to show that the clauses in (b) are inherently inconsistent. (Note: you do not need any additional axioms.)

Exercise 25

How can resolution be used to show that a sentence is valid? Unsatisfiable?

Exercise 26

Construct an example of two clauses that can be resolved together in two different ways, giving two different outcomes.

Exercise 27

From “Horses are animals,” it follows that “The head of a horse is the head of an animal.” Demonstrate that this inference is valid by carrying out the following steps:
1. 
Translate the premise and the conclusion into the language of first-order logic. Use three predicates: ${HeadOf}(h,x)$ (meaning “$h$ is the head of $x$”), ${Horse}(x)$, and ${Animal}(x)$.
2. Negate the conclusion, and convert the premise and the negated conclusion into conjunctive normal form.
3. Use resolution to show that the conclusion follows from the premise.

Exercise 28

From “Sheep are animals,” it follows that “The head of a sheep is the head of an animal.” Demonstrate that this inference is valid by carrying out the following steps:
1. Translate the premise and the conclusion into the language of first-order logic. Use three predicates: ${HeadOf}(h,x)$ (meaning “$h$ is the head of $x$”), ${Sheep}(x)$, and ${Animal}(x)$.
2. Negate the conclusion, and convert the premise and the negated conclusion into conjunctive normal form.
3. Use resolution to show that the conclusion follows from the premise.

Exercise 29 (quantifier-order-exercise)

Here are two sentences in the language of first-order logic:
- (A) $\forall\,x\;\exists\,y\;( x \geq y )$
- (B) $\exists\,y\;\forall\,x\;( x \geq y )$

1. Assume that the variables range over all the natural numbers $0,1,2,\ldots,\infty$ and that the “$\geq$” predicate means “is greater than or equal to.” Under this interpretation, translate (A) and (B) into English.
2. Is (A) true under this interpretation?
3. Is (B) true under this interpretation?
4. Does (A) logically entail (B)?
5. Does (B) logically entail (A)?
6. Using resolution, try to prove that (A) follows from (B). Do this even if you think that (B) does not logically entail (A); continue until the proof breaks down and you cannot proceed (if it does break down). Show the unifying substitution for each resolution step. If the proof fails, explain exactly where, how, and why it breaks down.
7. Now try to prove that (B) follows from (A).
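Exercise 26 can be checked mechanically at the propositional level. The following sketch (my own illustration, not from the text) enumerates all resolvents of two clauses, one per complementary literal pair, and shows that the clauses $\{P, Q\}$ and $\{\lnot P, \lnot Q\}$ resolve in two different ways with two different outcomes.

```python
def resolvents(c1, c2):
    """All clauses obtainable by resolving clauses c1 and c2.

    A clause is a frozenset of literals; a literal is ('P', True) for P
    and ('P', False) for its negation ¬P.
    """
    results = set()
    for (name, sign) in c1:
        if (name, not sign) in c2:  # complementary pair found
            resolvent = (c1 - {(name, sign)}) | (c2 - {(name, not sign)})
            results.add(frozenset(resolvent))
    return results

c1 = frozenset({('P', True), ('Q', True)})    # P ∨ Q
c2 = frozenset({('P', False), ('Q', False)})  # ¬P ∨ ¬Q
for r in sorted(resolvents(c1, c2)):
    print(sorted(r))
# Resolving on P yields {Q, ¬Q}; resolving on Q yields {P, ¬P}:
# two distinct (tautological) resolvents, as Exercise 26 asks for.
```

Note that resolving on both pairs at once is not allowed; doing so would yield the empty clause from two satisfiable clauses, which is one standard pitfall behind this exercise.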
Exercise 30

Resolution can produce nonconstructive proofs for queries with variables, so we had to introduce special mechanisms to extract definite answers. Explain why this issue does not arise with knowledge bases containing only definite clauses.

Exercise 31

We said in this chapter that resolution cannot be used to generate all logical consequences of a set of sentences. Can any algorithm do this?

Exercise 1

Consider a robot whose operation is described by the following PDDL operators:
$$Op({Go(x,y)},\ {At(Robot,x)},\ {\lnot At(Robot,x) \land At(Robot,y)})$$
$$Op({Pick(o)},\ {At(Robot,x)\land At(o,x)},\ {\lnot At(o,x) \land Holding(o)})$$
$$Op({Drop(o)},\ {At(Robot,x)\land Holding(o)},\ {At(o,x) \land \lnot Holding(o)})$$
1. The operators allow the robot to hold more than one object. Show how to modify them with an $EmptyHand$ predicate for a robot that can hold only one object.
2. Assuming that these are the only actions in the world, write a successor-state axiom for $EmptyHand$.

Exercise 2

Describe the differences and similarities between problem solving and planning.

Exercise 3

Given the action schemas and initial state from Figure airport-pddl-algorithm, what are all the applicable concrete instances of ${Fly}(p,{from},{to})$ in the state described by
$$At(P_1,JFK) \land At(P_2,SFO) \land Plane(P_1) \land Plane(P_2) \land Airport(JFK) \land Airport(SFO)?$$

Exercise 4

The monkey-and-bananas problem is faced by a monkey in a laboratory with some bananas hanging out of reach from the ceiling. A box is available that will enable the monkey to reach the bananas if he climbs on it. Initially, the monkey is at $A$, the bananas at $B$, and the box at $C$. The monkey and box have height ${Low}$, but if the monkey climbs onto the box he will have height ${High}$, the same as the bananas. The actions available to the monkey include ${Go}$ from one place to another, ${Push}$ an object from one place to another, ${ClimbUp}$ onto or ${ClimbDown}$ from an object, and ${Grasp}$ or ${Ungrasp}$ an object.
The result of a ${Grasp}$ is that the monkey holds the object if the monkey and object are in the same place at the same height.
1. Write down the initial state description.
2. Write the six action schemas.
3. Suppose the monkey wants to fool the scientists, who are off to tea, by grabbing the bananas, but leaving the box in its original place. Write this as a general goal (i.e., not assuming that the box is necessarily at C) in the language of situation calculus. Can this goal be solved by a classical planning system?
4. Your schema for pushing is probably incorrect, because if the object is too heavy, its position will remain the same when the ${Push}$ schema is applied. Fix your action schema to account for heavy objects.

Exercise 5

The original Strips planner was designed to control Shakey the robot. Figure shakey-figure shows a version of Shakey’s world consisting of four rooms lined up along a corridor, where each room has a door and a light switch. The actions in Shakey’s world include moving from place to place, pushing movable objects (such as boxes), climbing onto and down from rigid objects (such as boxes), and turning light switches on and off. The robot itself could not climb on a box or toggle a switch, but the planner was capable of finding and printing out plans that were beyond the robot’s abilities. Shakey’s six actions are the following:
- ${Go}(x,y,r)$, which requires that Shakey be ${At}$ $x$ and that $x$ and $y$ are locations ${In}$ the same room $r$. By convention a door between two rooms is in both of them.
- Push a box $b$ from location $x$ to location $y$ within the same room: ${Push}(b,x,y,r)$. You will need the predicate ${Box}$ and constants for the boxes.
- Climb onto a box from position $x$: ${ClimbUp}(x, b)$; climb down from a box to position $x$: ${ClimbDown}(b, x)$. We will need the predicate ${On}$ and the constant ${Floor}$.
- Turn a light switch on or off: ${TurnOn}(s,b)$; ${TurnOff}(s,b)$.
To turn a light on or off, Shakey must be on top of a box at the light switch’s location.

Write PDDL sentences for Shakey’s six actions and the initial state from Figure shakey-figure. Construct a plan for Shakey to get ${Box}_2$ into ${Room}_2$.

Shakey’s world. Shakey can move between landmarks within a room, can pass through the door between rooms, can climb climbable objects and push pushable objects, and can flip light switches.

Exercise 6

A finite Turing machine has a finite one-dimensional tape of cells, each cell containing one of a finite number of symbols. One cell has a read and write head above it. There is a finite set of states the machine can be in, one of which is the accept state. At each time step, depending on the symbol on the cell under the head and the machine’s current state, there is a set of actions we can choose from. Each action involves writing a symbol to the cell under the head, transitioning the machine to a state, and optionally moving the head left or right. The mapping that determines which actions are allowed is the Turing machine’s program. Your goal is to control the machine into the accept state. Represent the Turing machine acceptance problem as a planning problem. If you can do this, it demonstrates that determining whether a planning problem has a solution is at least as hard as the Turing acceptance problem, which is PSPACE-hard.

Exercise 7 (negative-effects-exercise)

Explain why dropping negative effects from every action schema results in a relaxed problem, provided that preconditions and goals contain only positive literals.

Exercise 8 (sussman-anomaly-exercise)

Figure sussman-anomaly-figure (page sussman-anomaly-figure) shows a blocks-world problem that is known as the Sussman anomaly. The problem was considered anomalous because the noninterleaved planners of the early 1970s could not solve it. Write a definition of the problem and solve it, either by hand or with a planning program.
A noninterleaved planner is a planner that, when given two subgoals $G_{1}$ and $G_{2}$, produces either a plan for $G_{1}$ concatenated with a plan for $G_{2}$, or vice versa. Can a noninterleaved planner solve this problem? How, or why not?

Exercise 9

Prove that backward search with PDDL problems is complete.

Exercise 10

Construct levels 0, 1, and 2 of the planning graph for the problem in Figure airport-pddl-algorithm.

Exercise 11 (graphplan-proof-exercise)

Prove the following assertions about planning graphs:
1. A literal that does not appear in the final level of the graph cannot be achieved.
2. The level cost of a literal in a serial graph is no greater than the actual cost of an optimal plan for achieving it.

Exercise 12

We saw that planning graphs can handle only propositional actions. What if we want to use planning graphs for a problem with variables in the goal, such as ${At}(P_{1}, x) \land {At}(P_{2}, x)$, where $x$ is assumed to be bound by an existential quantifier that ranges over a finite domain of locations? How could you encode such a problem to work with planning graphs?

Exercise 13

The set-level heuristic (see page set-level-page) uses a planning graph to estimate the cost of achieving a conjunctive goal from the current state. What relaxed problem is the set-level heuristic the solution to?

Exercise 14

Examine the definition of bidirectional search in Chapter search-chapter.
1. Would bidirectional state-space search be a good idea for planning?
2. What about bidirectional search in the space of partial-order plans?
3. Devise a version of partial-order planning in which an action can be added to a plan if its preconditions can be achieved by the effects of actions already in the plan. Explain how to deal with conflicts and ordering constraints.
Is the algorithm essentially identical to forward state-space search?

Exercise 15

We contrasted forward and backward state-space searchers with partial-order planners, saying that the latter is a plan-space searcher. Explain how forward and backward state-space search can also be considered plan-space searchers, and say what the plan refinement operators are.

Exercise 16 (satplan-preconditions-exercise)

Up to now we have assumed that the plans we create always make sure that an action’s preconditions are satisfied. Let us now investigate what propositional successor-state axioms such as ${HaveArrow}^{t+1} \Leftrightarrow ({HaveArrow}^t \land \lnot {Shoot}^t)$ have to say about actions whose preconditions are not satisfied.
1. Show that the axioms predict that nothing will happen when an action is executed in a state where its preconditions are not satisfied.
2. Consider a plan $p$ that contains the actions required to achieve a goal but also includes illegal actions. Is it the case that
$$\textit{initial state} \land \textit{successor-state axioms} \land p \models \textit{goal}\ ?$$
3. With first-order successor-state axioms in situation calculus, is it possible to prove that a plan containing illegal actions will achieve the goal?

Exercise 17 (strips-translation-exercise)

Consider how to translate a set of action schemas into the successor-state axioms of situation calculus.
1. Consider the schema for ${Fly}(p,{from},{to})$. Write a logical definition for the predicate ${Poss}({Fly}(p,{from},{to}),s)$, which is true if the preconditions for ${Fly}(p,{from},{to})$ are satisfied in situation $s$.
2. Next, assuming that ${Fly}(p,{from},{to})$ is the only action schema available to the agent, write down a successor-state axiom for ${At}(p,x,s)$ that captures the same information as the action schema.
3. Now suppose there is an additional method of travel: ${Teleport}(p,{from},{to})$. It has the additional precondition $\lnot {Warped}(p)$ and the additional effect ${Warped}(p)$.
Explain how the situation calculus knowledge base must be modified.
4. Finally, develop a general and precisely specified procedure for carrying out the translation from a set of action schemas to a set of successor-state axioms.

Exercise 18 (disjunctive-satplan-exercise)

In the $SATPlan$ algorithm in Figure satplan-agent-algorithm (page satplan-agent-algorithm), each call to the satisfiability algorithm asserts a goal $g^T$, where $T$ ranges from 0 to $T_{max}$. Suppose instead that the satisfiability algorithm is called only once, with the goal $g^0 \vee g^1 \vee \cdots \vee g^{T_{max}}$.
1. Will this always return a plan if one exists with length less than or equal to $T_{max}$?
2. Does this approach introduce any new spurious “solutions”?
3. Discuss how one might modify a satisfiability algorithm such as $WalkSAT$ so that it finds short solutions (if they exist) when given a disjunctive goal of this form.

Exercise 1

The goals we have considered so far all ask the planner to make the world satisfy the goal at just one time step. Not all goals can be expressed this way: you do not achieve the goal of suspending a chandelier above the ground by throwing it in the air. More seriously, you wouldn’t want your spacecraft life-support system to supply oxygen one day but not the next. A maintenance goal is achieved when the agent’s plan causes a condition to hold continuously from a given state onward. Describe how to extend the formalism of this chapter to support maintenance goals.

Exercise 2

You have a number of trucks with which to deliver a set of packages. Each package starts at some location on a grid map, and has a destination somewhere else. Each truck is directly controlled by moving forward and turning. Construct a hierarchy of high-level actions for this problem. What knowledge about the solution does your hierarchy encode?

Exercise 3 (HLA-unique-exercise)

Suppose that a high-level action has exactly one implementation as a sequence of primitive actions.
Give an algorithm for computing its preconditions and effects, given the complete refinement hierarchy and schemas for the primitive actions.

Exercise 4

Suppose that the optimistic reachable set of a high-level plan is a superset of the goal set; can anything be concluded about whether the plan achieves the goal? What if the pessimistic reachable set doesn’t intersect the goal set? Explain.

Exercise 5 (HLA-progression-exercise)

Write an algorithm that takes an initial state (specified by a set of propositional literals) and a sequence of HLAs (each defined by preconditions and angelic specifications of optimistic and pessimistic reachable sets) and computes optimistic and pessimistic descriptions of the reachable set of the sequence.

Exercise 6

In Figure jobshop-cpm-figure we showed how to describe actions in a scheduling problem by using separate fields for the duration, resource use, and resource consumption of each action. Now suppose we wanted to combine scheduling with nondeterministic planning, which requires nondeterministic and conditional effects. Consider each of the three fields and explain whether they should remain separate fields or should become effects of the action. Give an example for each of the three.

Exercise 7

Some of the operations in standard programming languages can be modeled as actions that change the state of the world. For example, the assignment operation changes the contents of a memory location, and the print operation changes the state of the output stream. A program consisting of these operations can also be considered as a plan, whose goal is given by the specification of the program. Therefore, planning algorithms can be used to construct programs that achieve a given specification.
1. Write an action schema for the assignment operator (assigning the value of one variable to another). Remember that the original value will be overwritten!
2. Show how object creation can be used by a planner to produce a plan for exchanging the values of two variables by using a temporary variable.
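Several of the planning exercises above (the robot operators, the monkey-and-bananas schemas, Shakey, the assignment action) come down to writing preconditions, add lists, and delete lists. The ground STRIPS-style progression that such schemas instantiate can be sketched as follows; the tiny two-action domain is invented in the spirit of the robot operators, not taken from the text.

```python
from collections import namedtuple

# A ground STRIPS action: name, plus precondition / add / delete fluent sets.
Action = namedtuple('Action', ['name', 'pre', 'add', 'delete'])

def applicable(state, action):
    """An action is applicable when its preconditions all hold in the state."""
    return action.pre <= state

def result(state, action):
    """Progression: remove the delete list, then add the add list."""
    return (state - action.delete) | action.add

# Invented mini-domain: the robot moves from A to B, then picks up object O.
go_ab = Action('Go(A,B)', pre={'At(Robot,A)'},
               add={'At(Robot,B)'}, delete={'At(Robot,A)'})
pick = Action('Pick(O)', pre={'At(Robot,B)', 'At(O,B)', 'EmptyHand'},
              add={'Holding(O)'}, delete={'At(O,B)', 'EmptyHand'})

state = {'At(Robot,A)', 'At(O,B)', 'EmptyHand'}
for a in (go_ab, pick):
    assert applicable(state, a), a.name
    state = result(state, a)
print(sorted(state))  # → ['At(Robot,B)', 'Holding(O)']
```

The `EmptyHand` fluent here plays exactly the role asked for in the first planning exercise: with it in `pick`’s precondition and delete list, the robot can hold only one object at a time.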
Exercise 8

Consider the following argument: In a framework that allows uncertain initial states, nondeterministic effects are just a notational convenience, not a source of additional representational power. For any action schema $a$ with nondeterministic effect $P \lor Q$, we could always replace it with the conditional effects ${~R{:}~P} \land {~\lnot R{:}~Q}$, which in turn can be reduced to two regular actions. The proposition $R$ stands for a random proposition that is unknown in the initial state and for which there are no sensing actions. Is this argument correct? Consider separately two cases, one in which only one instance of action schema $a$ is in the plan, the other in which more than one instance is.

Exercise 9 (conformant-flip-literal-exercise)

Suppose the ${Flip}$ action always changes the truth value of variable $L$. Show how to define its effects by using an action schema with conditional effects. Show that, despite the use of conditional effects, a 1-CNF belief state representation remains in 1-CNF after a ${Flip}$.

Exercise 10

In the blocks world we were forced to introduce two action schemas, ${Move}$ and ${MoveToTable}$, in order to maintain the ${Clear}$ predicate properly. Show how conditional effects can be used to represent both of these cases with a single action.

Exercise 11 (alt-vacuum-exercise)

Conditional effects were illustrated for the ${Suck}$ action in the vacuum world: which square becomes clean depends on which square the robot is in. Can you think of a new set of propositional variables to define states of the vacuum world, such that ${Suck}$ has an unconditional description? Write out the descriptions of ${Suck}$, ${Left}$, and ${Right}$, using your propositions, and demonstrate that they suffice to describe all possible states of the world.

Exercise 12

Find a suitably dirty carpet, free of obstacles, and vacuum it. Draw the path taken by the vacuum cleaner as accurately as you can.
Explain it, with reference to the forms of planning discussed in this chapter.

Exercise 13

The following quotes are from the backs of shampoo bottles. Identify each as an unconditional, conditional, or execution-monitoring plan. (a) “Lather. Rinse. Repeat.” (b) “Apply shampoo to scalp and let it remain for several minutes. Rinse and repeat if necessary.” (c) “See a doctor if problems persist.”

Exercise 14

Consider the following problem: A patient arrives at the doctor’s office with symptoms that could have been caused either by dehydration or by disease $D$ (but not both). There are two possible actions: ${Drink}$, which unconditionally cures dehydration, and ${Medicate}$, which cures disease $D$ but has an undesirable side effect if taken when the patient is dehydrated. Write the problem description, and diagram a sensorless plan that solves the problem, enumerating all relevant possible worlds.

Exercise 15

To the medication problem in the previous exercise, add a ${Test}$ action that has the conditional effect ${CultureGrowth}$ when ${Disease}$ is true and in any case has the perceptual effect ${Known}({CultureGrowth})$. Diagram a conditional plan that solves the problem and minimizes the use of the ${Medicate}$ action.

Exercise 1

Define an ontology in first-order logic for tic-tac-toe. The ontology should contain situations, actions, squares, players, marks (X, O, or blank), and the notion of winning, losing, or drawing a game. Also define the notion of a forced win (or draw): a position from which a player can force a win (or draw) with the right sequence of actions. Write axioms for the domain. (Note: The axioms that enumerate the different squares and that characterize the winning positions are rather long. You need not write these out in full, but indicate clearly what they look like.)

Exercise 2

You are to create a system for advising computer science undergraduates on what courses to take over an extended period in order to satisfy the program requirements.
(Use whatever requirements are appropriate for your institution.) First, decide on a vocabulary for representing all the information, and then represent it; then formulate a query to the system that will return a legal program of study as a solution. You should allow for some tailoring to individual students, in that your system should ask what courses or equivalents the student has already taken, and not generate programs that repeat those courses.

Suggest ways in which your system could be improved, for example, to take into account knowledge about student preferences, the workload, good and bad instructors, and so on. For each kind of knowledge, explain how it could be expressed logically. Could your system easily incorporate this information to find all feasible programs of study for a student? Could it find the best program?

Exercise 3

Figure ontology-figure shows the top levels of a hierarchy for everything. Extend it to include as many real categories as possible. A good way to do this is to cover all the things in your everyday life. This includes objects and events. Start with waking up, and proceed in an orderly fashion noting everything that you see, touch, do, and think about. For example, a random sampling produces music, news, milk, walking, driving, gas, Soda Hall, carpet, talking, Professor Fateman, chicken curry, tongue, \$7, sun, the daily newspaper, and so on.

You should produce both a single hierarchy chart (on a large sheet of paper) and a listing of objects and categories with the relations satisfied by members of each category. Every object should be in a category, and every category should be in the hierarchy.

Exercise 4 (windows-exercise)

Develop a representational system for reasoning about windows in a window-based computer interface.
In particular, your representation should be able to describe:
- The state of a window: minimized, displayed, or nonexistent.
- Which window (if any) is the active window.
- The position of every window at a given time.
- The order (front to back) of overlapping windows.
- The actions of creating, destroying, resizing, and moving windows; changing the state of a window; and bringing a window to the front. Treat these actions as atomic; that is, do not deal with the issue of relating them to mouse actions. Give axioms describing the effects of actions on fluents. You may use either event or situation calculus.

Assume an ontology containing situations, actions, integers (for $x$ and $y$ coordinates) and windows. Define a language over this ontology; that is, a list of constants, function symbols, and predicates with an English description of each. If you need to add more categories to the ontology (e.g., pixels), you may do so, but be sure to specify these in your write-up. You may (and should) use symbols defined in the text, but be sure to list these explicitly.

Exercise 5

State the following in the language you developed for the previous exercise:
1. In situation $S_0$, window $W_1$ is behind $W_2$ but sticks out on the top and bottom. Do not state exact coordinates for these; describe the general situation.
2. If a window is displayed, then its top edge is higher than its bottom edge.
3. After you create a window $w$, it is displayed.
4. A window can be minimized only if it is displayed.

Exercise 6

State the following in the language you developed for the previous exercise:
1. In situation $S_0$, window $W_1$ is behind $W_2$ but sticks out on the top and bottom. Do not state exact coordinates for these; describe the general situation.
2. If a window is displayed, then its top edge is higher than its bottom edge.
3. After you create a window $w$, it is displayed.
4. A window can be minimized only if it is displayed.

Exercise 7

(Adapted from an example by Doug Lenat.)
Your mission is to capture, in logical form, enough knowledge to answer a series of questions about the following simple scenario:

Yesterday John went to the North Berkeley Safeway supermarket and bought two pounds of tomatoes and a pound of ground beef.

Start by trying to represent the content of the sentence as a series of assertions. You should write sentences that have straightforward logical structure (e.g., statements that objects have certain properties, that objects are related in certain ways, that all objects satisfying one property satisfy another). The following might help you get started:

- Which classes, objects, and relations would you need? What are their parents, siblings, and so on? (You will need events and temporal ordering, among other things.)
- Where would they fit in a more general hierarchy?
- What are the constraints and interrelationships among them?
- How detailed must you be about each of the various concepts?

To answer the questions below, your knowledge base must include background knowledge. You’ll have to deal with what kind of things are at a supermarket, what is involved with purchasing the things one selects, what the purchases will be used for, and so on. Try to make your representation as general as possible. To give a trivial example: don’t say “People buy food from Safeway,” because that won’t help you with those who shop at another supermarket. Also, don’t turn the questions into answers; for example, question (c) asks “Did John buy any meat?”—not “Did John buy a pound of ground beef?”

Sketch the chains of reasoning that would answer the questions. If possible, use a logical reasoning system to demonstrate the sufficiency of your knowledge base.
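As a minimal illustration of what "a series of assertions" plus a mechanical query might look like, here is a toy sketch in Python. It is not a real reasoning system, and every identifier in it (the relation names, `Tomatoes1`, the subclass chain) is invented for the sketch; a serious answer needs a proper knowledge representation language and prover.

```python
# Toy encoding of the Safeway scenario as ground assertions (triples).
# All names here are hypothetical; they only illustrate the shape of a KB.
facts = {
    ("isa", "Tomatoes1", "Tomato"),
    ("isa", "GroundBeef1", "GroundBeef"),
    ("subclass", "Tomato", "Vegetable"),
    ("subclass", "GroundBeef", "Meat"),
    ("subclass", "Vegetable", "Food"),
    ("subclass", "Meat", "Food"),
    ("bought", "John", "Tomatoes1"),
    ("bought", "John", "GroundBeef1"),
}

def categories(obj):
    """All categories of obj: its isa facts, closed under subclass."""
    cats = {c for (rel, o, c) in facts if rel == "isa" and o == obj}
    changed = True
    while changed:
        changed = False
        for (rel, sub, sup) in facts:
            if rel == "subclass" and sub in cats and sup not in cats:
                cats.add(sup)
                changed = True
    return cats

def bought_kind(person, category):
    """Did `person` buy anything belonging to `category`?"""
    return any(rel == "bought" and p == person and category in categories(obj)
               for (rel, p, obj) in facts)

print(bought_kind("John", "Meat"))  # question (c): did John buy any meat?
print(bought_kind("John", "Food"))
```

The general point the sketch makes is the one in the exercise text: the answer to "Did John buy any meat?" comes from chaining through category knowledge (ground beef is a kind of meat), not from restating the question as a fact.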
Many of the things you write might be only approximately correct in reality, but don’t worry too much; the idea is to extract the common sense that lets you answer these questions at all. A truly complete answer to this question is extremely difficult, probably beyond the state of the art of current knowledge representation. But you should be able to put together a consistent set of axioms for the limited questions posed here.

1. Is John a child or an adult? [Adult]
2. Does John now have at least two tomatoes? [Yes]
3. Did John buy any meat? [Yes]
4. If Mary was buying tomatoes at the same time as John, did he see her? [Yes]
5. Are the tomatoes made in the supermarket? [No]
6. What is John going to do with the tomatoes? [Eat them]
7. Does Safeway sell deodorant? [Yes]
8. Did John bring some money or a credit card to the supermarket? [Yes]
9. Does John have less money after going to the supermarket? [Yes]

Exercise 8

Make the necessary additions or changes to your knowledge base from the previous exercise so that the questions that follow can be answered. Include in your report a discussion of your changes, explaining why they were needed, whether they were minor or major, and what kinds of questions would necessitate further changes.

1. Are there other people in Safeway while John is there? [Yes—staff!]
2. Is John a vegetarian? [No]
3. Who owns the deodorant in Safeway? [Safeway Corporation]
4. Did John have an ounce of ground beef? [Yes]
5. Does the Shell station next door have any gas? [Yes]
6. Do the tomatoes fit in John’s car trunk? [Yes]

Exercise 9

Represent the following seven sentences using and extending the representations developed in the chapter:

1. Water is a liquid between 0 and 100 degrees.
2. Water boils at 100 degrees.
3. The water in John’s water bottle is frozen.
4. Perrier is a kind of water.
5. John has Perrier in his water bottle.
6. All liquids have a freezing point.
7.
A liter of water weighs more than a liter of alcohol.

Exercise 10

(part-decomposition-exercise) Write definitions for the following:

1. ${ExhaustivePartDecomposition}$
2. ${PartPartition}$
3. ${PartwiseDisjoint}$

These should be analogous to the definitions for ${ExhaustiveDecomposition}$, ${Partition}$, and ${Disjoint}$. Is it the case that ${PartPartition}(s,{BunchOf}(s))$? If so, prove it; if not, give a counterexample and define sufficient conditions under which it does hold.

Exercise 11

(alt-measure-exercise) An alternative scheme for representing measures involves applying the units function to an abstract length object. In such a scheme, one would write ${Inches}({Length}(L_1)) = {1.5}$. How does this scheme compare with the one in the chapter? Issues include conversion axioms, names for abstract quantities (such as “50 dollars”), and comparisons of abstract measures in different units (50 inches is more than 50 centimeters).

Exercise 12

Write a set of sentences that allows one to calculate the price of an individual tomato (or other object), given the price per pound. Extend the theory to allow the price of a bag of tomatoes to be calculated.

Exercise 13

(namematch-exercise) Add sentences to extend the definition of the predicate ${Name}(s, c)$ so that a string such as “laptop computer” matches the appropriate category names from a variety of stores. Try to make your definition general. Test it by looking at ten online stores, and at the category names they give for three different categories.
For example, for the category of laptops, we found the names “Notebooks,” “Laptops,” “Notebook Computers,” “Notebook,” “Laptops and Notebooks,” and “Notebook PCs.” Some of these can be covered by explicit ${Name}$ facts, while others could be covered by sentences for handling plurals, conjunctions, etc.

Exercise 14

Write event calculus axioms to describe the actions in the wumpus world.

Exercise 15

State the interval-algebra relation that holds between every pair of the following real-world events:

> $LK$: The life of President Kennedy.
> $IK$: The infancy of President Kennedy.
> $PK$: The presidency of President Kennedy.
> $LJ$: The life of President Johnson.
> $PJ$: The presidency of President Johnson.
> $LO$: The life of President Obama.

Exercise 16

This exercise concerns the problem of planning a route for a robot to take from one city to another. The basic action taken by the robot is ${Go}(x,y)$, which takes it from city $x$ to city $y$ if there is a route between those cities. ${Road}(x, y)$ is true if and only if there is a road connecting cities $x$ and $y$; if there is, then ${Distance}(x, y)$ gives the length of the road. See the map on page romania-distances-figure for an example. The robot begins in Arad and must reach Bucharest.

1. Write a suitable logical description of the initial situation of the robot.
2. Write a suitable logical query whose solutions provide possible paths to the goal.
3. Write a sentence describing the ${Go}$ action.
4. Now suppose that the robot consumes fuel at the rate of 0.02 gallons per mile. The robot starts with 20 gallons of fuel. Augment your representation to include these considerations.
5. Now suppose some of the cities have gas stations at which the robot can fill its tank. Extend your representation and write all the rules needed to describe gas stations, including the ${Fillup}$ action.

Exercise 17

Investigate ways to extend the event calculus to handle simultaneous events.
Is it possible to avoid a combinatorial explosion of axioms?

Exercise 18

(exchange-rates-exercise) Construct a representation for exchange rates between currencies that allows for daily fluctuations.

Exercise 19

(fixed-definition-exercise) Define the predicate ${Fixed}$, where ${Fixed}({Location}(x))$ means that the location of object $x$ is fixed over time.

Exercise 20

Describe the event of trading something for something else. Describe buying as a kind of trading in which one of the objects traded is a sum of money.

Exercise 21

The two preceding exercises assume a fairly primitive notion of ownership. For example, the buyer starts by owning the dollar bills. This picture begins to break down when, for example, one’s money is in the bank, because there is no longer any specific collection of dollar bills that one owns. The picture is complicated still further by borrowing, leasing, renting, and bailment. Investigate the various commonsense and legal concepts of ownership, and propose a scheme by which they can be represented formally.

Exercise 22

(card-on-forehead-exercise) (Adapted from Fagin+al:1995.) Consider a game played with a deck of just 8 cards, 4 aces and 4 kings. The three players, Alice, Bob, and Carlos, are dealt two cards each. Without looking at them, they place the cards on their foreheads so that the other players can see them. Then the players take turns either announcing that they know what cards are on their own forehead, thereby winning the game, or saying “I don’t know.” Everyone knows the players are truthful and are perfect at reasoning about beliefs.

1. Game 1. Alice and Bob have both said “I don’t know.” Carlos sees that Alice has two aces (A-A) and Bob has two kings (K-K). What should Carlos say? (Hint: consider all three possible cases for Carlos: A-A, K-K, A-K.)
2. Describe each step of Game 1 using the notation of modal logic.
3. Game 2. Carlos, Alice, and Bob all said “I don’t know” on their first turn. Alice holds K-K and Bob holds A-K.
What should Carlos say on his second turn?
4. Game 3. Alice, Carlos, and Bob all say “I don’t know” on their first turn, as does Alice on her second turn. Alice and Bob both hold A-K. What should Carlos say?
5. Prove that there will always be a winner to this game.

Exercise 23

The assumption of logical omniscience, discussed on page logical-omniscience, is of course not true of any actual reasoners. Rather, it is an idealization of the reasoning process that may be more or less acceptable depending on the applications. Discuss the reasonableness of the assumption for each of the following applications of reasoning about knowledge:

1. Partial knowledge adversary games, such as card games. Here one player wants to reason about what his opponent knows about the state of the game.
2. Chess with a clock. Here the player may wish to reason about the limits of his opponent’s or his own ability to find the best move in the time available. For instance, if player A has much more time left than player B, then A will sometimes make a move that greatly complicates the situation, in the hopes of gaining an advantage because he has more time to work out the proper strategy.
3. A shopping agent in an environment in which there are costs of gathering information.
4. Reasoning about public key cryptography, which rests on the intractability of certain computational problems.

Exercise 24

The assumption of logical omniscience, discussed on page logical-omniscience, is of course not true of any actual reasoners. Rather, it is an idealization of the reasoning process that may be more or less acceptable depending on the applications. Discuss the reasonableness of the assumption for each of the following applications of reasoning about knowledge:

1. Partial knowledge adversary games, such as card games. Here one player wants to reason about what his opponent knows about the state of the game.
2. Chess with a clock.
Here the player may wish to reason about the limits of his opponent’s or his own ability to find the best move in the time available. For instance, if player A has much more time left than player B, then A will sometimes make a move that greatly complicates the situation, in the hopes of gaining an advantage because he has more time to work out the proper strategy.
3. A shopping agent in an environment in which there are costs of gathering information.
4. Reasoning about public key cryptography, which rests on the intractability of certain computational problems.

Exercise 25

Translate the following description logic expression (from page description-logic-ex) into first-order logic, and comment on the result:

$$And(Man, AtLeast(3, Son), AtMost(2, Daughter), All(Son, And(Unemployed, Married, All(Spouse, Doctor))), All(Daughter, And(Professor, Fills(Department, Physics, Math))))$$

Exercise 26

Recall that inheritance information in semantic networks can be captured logically by suitable implication sentences. This exercise investigates the efficiency of using such sentences for inheritance.

1. Consider the information in a used-car catalog such as Kelly’s Blue Book—for example, that 1973 Dodge vans are (or perhaps were once) worth 575. Suppose all this information (for 11,000 models) is encoded as logical sentences, as suggested in the chapter. Write down three such sentences, including that for 1973 Dodge vans. How would you use the sentences to find the value of a particular car, given a backward-chaining theorem prover such as Prolog?
2. Compare the time efficiency of the backward-chaining method for solving this problem with the inheritance method used in semantic nets.
3. Explain how forward chaining allows a logic-based system to solve the same problem efficiently, assuming that the KB contains only the 11,000 sentences about prices.
4.
Describe a situation in which neither forward nor backward chaining on the sentences will allow the price query for an individual car to be handled efficiently.
5. Can you suggest a solution enabling this type of query to be solved efficiently in all cases in logic systems? (Hint: remember that two cars of the same year and model have the same price.)

Exercise 27

(natural-stupidity-exercise) One might suppose that the syntactic distinction between unboxed links and singly boxed links in semantic networks is unnecessary, because singly boxed links are always attached to categories; an inheritance algorithm could simply assume that an unboxed link attached to a category is intended to apply to all members of that category. Show that this argument is fallacious, giving examples of errors that would arise.

Exercise 28

One part of the shopping process that was not covered in this chapter is checking for compatibility between items. For example, if a digital camera is ordered, what accessory batteries, memory cards, and cases are compatible with the camera? Write a knowledge base that can determine the compatibility of a set of items and suggest replacements or additional items if the shopper makes a choice that is not compatible. The knowledge base should work with at least one line of products and extend easily to other lines.

Exercise 29

(shopping-grammar-exercise) A complete solution to the problem of inexact matches to the buyer’s description in shopping is very difficult and requires a full array of natural language processing and information retrieval techniques. (See Chapters nlp1-chapter and nlp-english-chapter.) One small step is to allow the user to specify minimum and maximum values for various attributes.
The buyer must use the following grammar for product descriptions:

$$Description \rightarrow Category\ [Connector\ Modifier]^*$$
$$Connector \rightarrow \text{“with”} \mid \text{“and”} \mid \text{“,”}$$
$$Modifier \rightarrow Attribute \mid Attribute\ Op\ Value$$
$$Op \rightarrow \text{“=”} \mid \text{“>”} \mid \text{“<”}$$

Here, ${Category}$ names a product category, ${Attribute}$ is some feature such as “CPU” or “price,” and ${Value}$ is the target value for the attribute. So the query “computer with at least a 2.5 GHz CPU for under 500” must be re-expressed as “computer with CPU $>$ 2.5 GHz and price $<$ 500.” Implement a shopping agent that accepts descriptions in this language.

Exercise 30

(buying-exercise) Our description of Internet shopping omitted the all-important step of actually buying the product. Provide a formal logical description of buying, using event calculus. That is, define the sequence of events that occurs when a buyer submits a credit-card purchase and then eventually gets billed and receives the product.

Exercise 1

Show from first principles that $P(a \mid b \land a) = 1$.

Exercise 2

(sum-to-1-exercise) Using the axioms of probability, prove that any probability distribution on a discrete random variable must sum to 1.

Exercise 3

For each of the following statements, either prove it is true or give a counterexample.

1. If $P(a \mid b, c) = P(b \mid a, c)$, then $P(a \mid c) = P(b \mid c)$
2. If $P(a \mid b, c) = P(a)$, then $P(b \mid c) = P(b)$
3. If $P(a \mid b) = P(a)$, then $P(a \mid b, c) = P(a \mid c)$

Exercise 4

Would it be rational for an agent to hold the three beliefs $P(A) = 0.4$, $P(B) = 0.3$, and $P(A \lor B) = 0.5$? If so, what range of probabilities would be rational for the agent to hold for $A \land B$? Make up a table like the one in Figure de-finetti-table, and show how it supports your argument about rationality. Then draw another version of the table where $P(A \lor B) = 0.7$. Explain why it is rational to have this probability, even though the table shows one case that is a loss and three that just break even.
(Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the case that is a loss?)

Exercise 5

(exclusive-exhaustive-exercise) This question deals with the properties of possible worlds, defined on page possible-worlds-page as assignments to all random variables. We will work with propositions that correspond to exactly one possible world because they pin down the assignments of all the variables. In probability theory, such propositions are called atomic events. For example, with Boolean variables $X_1$, $X_2$, $X_3$, the proposition $x_1 \land \lnot x_2 \land \lnot x_3$ fixes the assignment of the variables; in the language of propositional logic, we would say it has exactly one model.

1. Prove, for the case of $n$ Boolean variables, that any two distinct atomic events are mutually exclusive; that is, their conjunction is equivalent to ${false}$.
2. Prove that the disjunction of all possible atomic events is logically equivalent to ${true}$.
3. Prove that any proposition is logically equivalent to the disjunction of the atomic events that entail its truth.

Exercise 6

(inclusion-exclusion-exercise) Prove Equation (kolmogorov-disjunction-equation) from Equations (basic-probability-axiom-equation) and (proposition-probability-equation).

Exercise 7

Consider the set of all possible five-card poker hands dealt fairly from a standard deck of fifty-two cards.

1. How many atomic events are there in the joint probability distribution (i.e., how many five-card hands are there)?
2. What is the probability of each atomic event?
3. What is the probability of being dealt a royal straight flush? Four of a kind?

Exercise 8

Given the full joint distribution shown in Figure dentist-joint-table, calculate the following:

1. ${\textbf{P}}({toothache})$.
2. ${\textbf{P}}({Cavity})$.
3. ${\textbf{P}}({Toothache} \mid {cavity})$.
4. ${\textbf{P}}({Cavity} \mid {toothache} \lor {catch})$.

Exercise 9

Given the full joint distribution shown in Figure dentist-joint-table, calculate the following:

1.
${\textbf{P}}({toothache})$.
2. ${\textbf{P}}({Catch})$.
3. ${\textbf{P}}({Cavity} \mid {catch})$.
4. ${\textbf{P}}({Cavity} \mid {toothache} \lor {catch})$.

Exercise 10

(unfinished-game-exercise) In his letter of August 24, 1654, Pascal was trying to show how a pot of money should be allocated when a gambling game must end prematurely. Imagine a game where each turn consists of the roll of a die, player E gets a point when the die is even, and player O gets a point when the die is odd. The first player to get 7 points wins the pot. Suppose the game is interrupted with E leading 4–2. How should the money be fairly split in this case? What is the general formula? (Fermat and Pascal made several errors before solving the problem, but you should be able to get it right the first time.)

Exercise 11

Deciding to put probability theory to good use, we encounter a slot machine with three independent wheels, each producing one of the four symbols bar, bell, lemon, or cherry with equal probability. The slot machine has the following payout scheme for a bet of 1 coin (where “?” denotes that we don’t care what comes up for that wheel):

> bar/bar/bar pays 20 coins
> bell/bell/bell pays 15 coins
> lemon/lemon/lemon pays 5 coins
> cherry/cherry/cherry pays 3 coins
> cherry/cherry/? pays 2 coins
> cherry/?/? pays 1 coin

1. Compute the expected “payback” percentage of the machine. In other words, for each coin played, what is the expected coin return?
2. Compute the probability that playing the slot machine once will result in a win.
3. Estimate the mean and median number of plays you can expect to make until you go broke, if you start with 10 coins. You can run a simulation to estimate this, rather than trying to compute an exact answer.

Exercise 12

Deciding to put probability theory to good use, we encounter a slot machine with three independent wheels, each producing one of the four symbols bar, bell, lemon, or cherry with equal probability.
The slot machine has the following payout scheme for a bet of 1 coin (where “?” denotes that we don’t care what comes up for that wheel):

> bar/bar/bar pays 20 coins
> bell/bell/bell pays 15 coins
> lemon/lemon/lemon pays 5 coins
> cherry/cherry/cherry pays 3 coins
> cherry/cherry/? pays 2 coins
> cherry/?/? pays 1 coin

1. Compute the expected “payback” percentage of the machine. In other words, for each coin played, what is the expected coin return?
2. Compute the probability that playing the slot machine once will result in a win.
3. Estimate the mean and median number of plays you can expect to make until you go broke, if you start with 10 coins. You can run a simulation to estimate this, rather than trying to compute an exact answer.

Exercise 13

We wish to transmit an $n$-bit message to a receiving agent. The bits in the message are independently corrupted (flipped) during transmission with $\epsilon$ probability each. With an extra parity bit sent along with the original information, a message can be corrected by the receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$? Calculate this value for the case $\epsilon = 0.001$, $\delta = 0.01$.

Exercise 14

We wish to transmit an $n$-bit message to a receiving agent. The bits in the message are independently corrupted (flipped) during transmission with $\epsilon$ probability each. With an extra parity bit sent along with the original information, a message can be corrected by the receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure that the correct message is received with probability at least $1-\delta$. What is the maximum feasible value of $n$?
Calculate this value for the case $\epsilon = 0.002$, $\delta = 0.01$.

Exercise 15

(independence-exercise) Show that the three forms of independence in Equation (independence-equation) are equivalent.

Exercise 16

Consider two medical tests, A and B, for a virus. Test A is 95% effective at recognizing the virus when it is present, but has a 10% false positive rate (indicating that the virus is present, when it is not). Test B is 90% effective at recognizing the virus, but has a 5% false positive rate. The two tests use independent methods of identifying the virus. The virus is carried by 1% of all people. Say that a person is tested for the virus using only one of the tests, and that test comes back positive for carrying the virus. Which test returning positive is more indicative of someone really carrying the virus? Justify your answer mathematically.

Exercise 17

Suppose you are given a coin that lands ${heads}$ with probability $x$ and ${tails}$ with probability $1 - x$. Are the outcomes of successive flips of the coin independent of each other given that you know the value of $x$? Are the outcomes of successive flips of the coin independent of each other if you do not know the value of $x$? Justify your answer.

Exercise 18

After your yearly checkup, the doctor has bad news and good news. The bad news is that you tested positive for a serious disease and that the test is 99% accurate (i.e., the probability of testing positive when you do have the disease is 0.99, as is the probability of testing negative when you don’t have the disease). The good news is that this is a rare disease, striking only 1 in 10,000 people of your age. Why is it good news that the disease is rare? What are the chances that you actually have the disease?

Exercise 19

After your yearly checkup, the doctor has bad news and good news.
The bad news is that you tested positive for a serious disease and that the test is 99% accurate (i.e., the probability of testing positive when you do have the disease is 0.99, as is the probability of testing negative when you don’t have the disease). The good news is that this is a rare disease, striking only 1 in 100,000 people of your age. Why is it good news that the disease is rare? What are the chances that you actually have the disease?

Exercise 20

(conditional-bayes-exercise) It is quite often useful to consider the effect of some specific propositions in the context of some general background evidence that remains fixed, rather than in the complete absence of information. The following questions ask you to prove more general versions of the product rule and Bayes’ rule, with respect to some background evidence $\textbf{e}$:

1. Prove the conditionalized version of the general product rule: $${\textbf{P}}(X,Y \mid \textbf{e}) = {\textbf{P}}(X \mid Y,\textbf{e})\, {\textbf{P}}(Y \mid \textbf{e}) .$$
2. Prove the conditionalized version of Bayes’ rule in Equation (conditional-bayes-equation).

Exercise 21

(pv-xyz-exercise) Show that the statement of conditional independence $${\textbf{P}}(X,Y \mid Z) = {\textbf{P}}(X \mid Z)\, {\textbf{P}}(Y \mid Z)$$ is equivalent to each of the statements $${\textbf{P}}(X \mid Y,Z) = {\textbf{P}}(X \mid Z) \quad\mbox{and}\quad {\textbf{P}}(Y \mid X,Z) = {\textbf{P}}(Y \mid Z) .$$

Exercise 22

Suppose you are given a bag containing $n$ unbiased coins. You are told that $n-1$ of these coins are normal, with heads on one side and tails on the other, whereas one coin is a fake, with heads on both sides.

1. Suppose you reach into the bag, pick out a coin at random, flip it, and get a head. What is the (conditional) probability that the coin you chose is the fake coin?
2. Suppose you continue flipping the coin for a total of $k$ times after picking it and see $k$ heads. Now what is the conditional probability that you picked the fake coin?
3.
Suppose you wanted to decide whether the chosen coin was fake by flipping it $k$ times. The decision procedure returns ${fake}$ if all $k$ flips come up heads; otherwise it returns ${normal}$. What is the (unconditional) probability that this procedure makes an error?

Exercise 23

(normalization-exercise) In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a suitable value for $P(s \mid \lnot m)$, and use it to calculate unnormalized values for $P(m \mid s)$ and $P(\lnot m \mid s)$ (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression, Equation (meningitis-bayes-equation)). Now normalize these values so that they add to 1.

Exercise 24

This exercise investigates the way in which conditional independence relationships affect the amount of information needed for probabilistic calculations.

1. Suppose we wish to calculate $P(h \mid e_1,e_2)$ and we have no conditional independence information. Which of the following sets of numbers are sufficient for the calculation?
   1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, ${\textbf{P}}(E_1 \mid H)$, ${\textbf{P}}(E_2 \mid H)$
   2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$, ${\textbf{P}}(E_1,E_2 \mid H)$
   3. ${\textbf{P}}(H)$, ${\textbf{P}}(E_1 \mid H)$, ${\textbf{P}}(E_2 \mid H)$
2. Suppose we know that ${\textbf{P}}(E_1 \mid H,E_2)={\textbf{P}}(E_1 \mid H)$ for all values of $H$, $E_1$, $E_2$. Now which of the three sets are sufficient?

Exercise 25

Let $X$, $Y$, $Z$ be Boolean random variables. Label the eight entries in the joint distribution ${\textbf{P}}(X,Y,Z)$ as $a$ through $h$. Express the statement that $X$ and $Y$ are conditionally independent given $Z$, as a set of equations relating $a$ through $h$. How many nonredundant equations are there?

Exercise 26

(Adapted from Pearl [Pearl:1988].) Suppose you are a witness to a nighttime hit-and-run accident involving a taxi in Athens. All taxis in Athens are blue or green.
You swear, under oath, that the taxi was blue. Extensive testing shows that, under the dim lighting conditions, discrimination between blue and green is 75% reliable.

1. Is it possible to calculate the most likely color for the taxi? (*Hint:* distinguish carefully between the proposition that the taxi *is* blue and the proposition that it *appears* blue.)
2. What if you know that 9 out of 10 Athenian taxis are green?

Exercise 27

Write out a general algorithm for answering queries of the form ${\textbf{P}}({Cause} \mid \textbf{e})$, using a naive Bayes distribution. Assume that the evidence $\textbf{e}$ may assign values to any subset of the effect variables.

Exercise 28

(naive-bayes-retrieval-exercise) Text categorization is the task of assigning a given document to one of a fixed set of categories on the basis of the text it contains. Naive Bayes models are often used for this task. In these models, the query variable is the document category, and the “effect” variables are the presence or absence of each word in the language; the assumption is that words occur independently in documents, with frequencies determined by the document category.

1. Explain precisely how such a model can be constructed, given as “training data” a set of documents that have been assigned to categories.
2. Explain precisely how to categorize a new document.
3. Is the conditional independence assumption reasonable? Discuss.

Exercise 29

In our analysis of the wumpus world, we used the fact that each square contains a pit with probability 0.2, independently of the contents of the other squares. Suppose instead that exactly $N/5$ pits are scattered at random among the $N$ squares other than [1,1]. Are the variables $P_{i,j}$ and $P_{k,l}$ still independent?
What is the joint distribution ${\textbf{P}}(P_{1,1},\ldots,P_{4,4})$ now? Redo the calculation for the probabilities of pits in [1,3] and [2,2].

Exercise 30

Redo the probability calculation for pits in [1,3] and [2,2], assuming that each square contains a pit with probability 0.01, independent of the other squares. What can you say about the relative performance of a logical versus a probabilistic agent in this case?

Exercise 31

Implement a hybrid probabilistic agent for the wumpus world, based on the hybrid agent in Figure hybrid-wumpus-agent-algorithm and the probabilistic inference procedure outlined in this chapter.

Exercise 1

We have a bag of three biased coins $a$, $b$, and $c$ with probabilities of coming up heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.

1. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
2. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.

Exercise 2

We have a bag of three biased coins $a$, $b$, and $c$ with probabilities of coming up heads of 30%, 60%, and 75%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.

1. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
2. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.

Exercise 3

(cpt-equivalence-exercise) Equation (parameter-joint-repn-equation) on page parameter-joint-repn-equation defines the joint distribution represented by a Bayesian network in terms of the parameters $\theta(X_i \mid {Parents}(X_i))$.
This exercise asks you to derive the equivalence between the parameters and the conditional probabilities ${\textbf{P}}(X_i \mid {Parents}(X_i))$ from this definition.

1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation) and (marginalization-equation) (pages conditional-probability-equation and marginalization-equation) to express the conditional probability $P(z \mid y)$ as the ratio of two sums, each over entries in the joint distribution ${\textbf{P}}(X,Y,Z)$.
2. Now use Equation (parameter-joint-repn-equation) to write this expression in terms of the network parameters $\theta(X)$, $\theta(Y \mid X)$, and $\theta(Z \mid Y)$.
3. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint $\sum_{x_i} \theta(x_i \mid {parents}(X_i)) = 1$, show that the resulting expression reduces to $\theta(z \mid y)$.
4. Generalize this derivation to show that $\theta(X_i \mid {Parents}(X_i)) = {\textbf{P}}(X_i \mid {Parents}(X_i))$ for any Bayesian network.

Exercise 4

The arc reversal operation in a Bayesian network allows us to change the direction of an arc $X\rightarrow Y$ while preserving the joint probability distribution that the network represents Shachter:1986. Arc reversal may require introducing new arcs: all the parents of $X$ also become parents of $Y$, and all parents of $Y$ also become parents of $X$.

1. Assume that $X$ and $Y$ start with $m$ and $n$ parents, respectively, and that all variables have $k$ values. By calculating the change in size for the CPTs of $X$ and $Y$, show that the total number of parameters in the network cannot decrease during arc reversal. (Hint: the parents of $X$ and $Y$ need not be disjoint.)
2. Under what circumstances can the total number remain constant?
3.
Let the parents of $X$ be $\textbf{U} \cup \textbf{V}$ and the parents of $Y$ be $\textbf{V} \cup \textbf{W}$, where $\textbf{U}$ and $\textbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows: $$\begin{aligned} \textbf{P}(Y | \textbf{U},\textbf{V},\textbf{W}) &= \sum_x \textbf{P}(Y | \textbf{V},\textbf{W}, x) \, \textbf{P}(x | \textbf{U}, \textbf{V}) \\ \textbf{P}(X | \textbf{U},\textbf{V},\textbf{W}, Y) &= \textbf{P}(Y | X, \textbf{V}, \textbf{W}) \, \textbf{P}(X | \textbf{U}, \textbf{V}) \,/\, \textbf{P}(Y | \textbf{U},\textbf{V},\textbf{W}) .\end{aligned}$$ Prove that the new network expresses the same joint distribution over all variables as the original network.

Exercise 5
Consider the Bayesian network in Figure burglary-figure.
1. If no evidence is observed, are ${Burglary}$ and ${Earthquake}$ independent? Prove this from the numerical semantics and from the topological semantics.
2. If we observe ${Alarm}={true}$, are ${Burglary}$ and ${Earthquake}$ independent? Justify your answer by calculating whether the probabilities involved satisfy the definition of conditional independence.

Exercise 6
Suppose that in a Bayesian network containing an unobserved variable $Y$, all the variables in the Markov blanket ${MB}(Y)$ have been observed.
1. Prove that removing the node $Y$ from the network will not affect the posterior distribution for any other unobserved variable in the network.
2. Discuss whether we can remove $Y$ if we are planning to use (i) rejection sampling and (ii) likelihood weighting.

Three possible structures for a Bayesian network describing genetic inheritance of handedness.

Exercise 7 (handedness-exercise)
Let $H_x$ be a random variable denoting the handedness of an individual $x$, with possible values $l$ or $r$.
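Both parts of Exercise 5 can be checked numerically. A sketch using the CPT values of the standard burglary network (P(B)=0.001, P(E)=0.002, and the four alarm entries below; treat these as assumptions if your copy of burglary-figure differs):

```python
from itertools import product

# Burglary network CPTs (assumed values, as in the usual burglary-figure).
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """Joint probability P(b, e, a) from the chain of CPTs."""
    pa = P_A[(b, e)]
    return P_B[b] * P_E[e] * (pa if a else 1 - pa)

# Part 1: with no evidence, P(b, e) = P(b) P(e) exactly.
p_be = sum(joint(True, True, a) for a in (True, False))
assert abs(p_be - P_B[True] * P_E[True]) < 1e-15

# Part 2: given Alarm = true, B and E are dependent:
# P(b, e | a) differs from P(b | a) P(e | a).
p_a = sum(joint(b, e, True) for b, e in product((True, False), repeat=2))
p_be_a = joint(True, True, True) / p_a
p_b_a = sum(joint(True, e, True) for e in (True, False)) / p_a
p_e_a = sum(joint(b, True, True) for b in (True, False)) / p_a
print(p_be_a, p_b_a * p_e_a)  # clearly unequal
```

The unconditional independence holds by construction (B and E are roots), while conditioning on the alarm makes the two causes compete to explain it.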
A common hypothesis is that left- or right-handedness is inherited by a simple mechanism; that is, perhaps there is a gene $G_x$, also with values $l$ or $r$, and perhaps actual handedness turns out mostly the same (with some probability $s$) as the gene an individual possesses. Furthermore, perhaps the gene itself is equally likely to be inherited from either of an individual’s parents, with a small nonzero probability $m$ of a random mutation flipping the handedness.
1. Which of the three networks in Figure handedness-figure claim that $\textbf{P}(G_{father},G_{mother},G_{child}) = \textbf{P}(G_{father})\textbf{P}(G_{mother})\textbf{P}(G_{child})$?
2. Which of the three networks make independence claims that are consistent with the hypothesis about the inheritance of handedness?
3. Which of the three networks is the best description of the hypothesis?
4. Write down the CPT for the $G_{child}$ node in network (a), in terms of $s$ and $m$.
5. Suppose that $P(G_{father}=l)=P(G_{mother}=l)=q$. In network (a), derive an expression for $P(G_{child}=l)$ in terms of $m$ and $q$ only, by conditioning on its parent nodes.
6. Under conditions of genetic equilibrium, we expect the distribution of genes to be the same across generations. Use this to calculate the value of $q$, and, given what you know about handedness in humans, explain why the hypothesis described at the beginning of this question must be wrong.

Exercise 8 (markov-blanket-exercise)
The Markov blanket of a variable is defined on page markov-blanket-page. Prove that a variable is independent of all other variables in the network, given its Markov blanket, and derive Equation (markov-blanket-equation) (page markov-blanket-equation).

A Bayesian network describing some features of a car's electrical system and engine. Each variable is Boolean, and the true value indicates that the corresponding aspect of the vehicle is in working order.

Exercise 9
Consider the network for car diagnosis shown in Figure car-starts-figure.
1.
Extend the network with the Boolean variables ${IcyWeather}$ and ${StarterMotor}$.
2. Give reasonable conditional probability tables for all the nodes.
3. How many independent values are contained in the joint probability distribution for eight Boolean nodes, assuming that no conditional independence relations are known to hold among them?
4. How many independent probability values do your network tables contain?
5. The conditional distribution for ${Starts}$ could be described as a noisy-AND distribution. Define this family in general and relate it to the noisy-OR distribution.

Exercise 10
Consider a simple Bayesian network with root variables ${Cold}$, ${Flu}$, and ${Malaria}$ and child variable ${Fever}$, with a noisy-OR conditional distribution for ${Fever}$ as described in Section canonical-distribution-section. By adding appropriate auxiliary variables for inhibition events and fever-inducing events, construct an equivalent Bayesian network whose CPTs (except for root variables) are deterministic. Define the CPTs and prove equivalence.

Exercise 11 (LG-exercise)
Consider the family of linear Gaussian networks, as defined on page LG-network-page.
1. In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let $\textbf{P}(X_2 | X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
2. Prove by induction that the joint distribution for a general linear Gaussian network on $X_1,\ldots,X_n$ is also a multivariate Gaussian.

Exercise 12 (multivalued-probit-exercise)
The probit distribution defined on page probit-page describes the probability distribution for a Boolean child, given a single continuous parent.
1. How might the definition be extended to cover multiple continuous parents?
2. How might it be extended to handle a multivalued child variable?
Consider both cases where the child’s values are ordered (as in selecting a gear while driving, depending on speed, slope, desired acceleration, etc.) and cases where they are unordered (as in selecting bus, train, or car to get to work). (Hint: Consider ways to divide the possible values into two sets, to mimic a Boolean variable.)

Exercise 13
In your local nuclear power station, there is an alarm that senses when a temperature gauge exceeds a given threshold. The gauge measures the temperature of the core. Consider the Boolean variables $A$ (alarm sounds), $F_A$ (alarm is faulty), and $F_G$ (gauge is faulty) and the multivalued nodes $G$ (gauge reading) and $T$ (actual core temperature).
1. Draw a Bayesian network for this domain, given that the gauge is more likely to fail when the core temperature gets too high.
2. Is your network a polytree? Why or why not?
3. Suppose there are just two possible actual and measured temperatures, normal and high; the probability that the gauge gives the correct temperature is $x$ when it is working, but $y$ when it is faulty. Give the conditional probability table associated with $G$.
4. Suppose the alarm works correctly unless it is faulty, in which case it never sounds. Give the conditional probability table associated with $A$.
5. Suppose the alarm and gauge are working and the alarm sounds. Calculate an expression for the probability that the temperature of the core is too high, in terms of the various conditional probabilities in the network.

Exercise 14 (telescope-exercise)
Two astronomers in different parts of the world make measurements $M_1$ and $M_2$ of the number of stars $N$ in some small region of the sky, using their telescopes. Normally, there is a small possibility $e$ of error by up to one star in each direction.
Each telescope can also (with a much smaller probability $f$) be badly out of focus (events $F_1$ and $F_2$), in which case the scientist will undercount by three or more stars (or if $N$ is less than 3, fail to detect any stars at all). Consider the three networks shown in Figure telescope-nets-figure.
1. Which of these Bayesian networks are correct (but not necessarily efficient) representations of the preceding information?
2. Which is the best network? Explain.
3. Write out a conditional distribution for $\textbf{P}(M_1 | N)$, for the case where $N \in \{1,2,3\}$ and $M_1 \in \{0,1,2,3,4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
4. Suppose $M_1=1$ and $M_2=3$. What are the possible numbers of stars if you assume no prior constraint on the values of $N$?
5. What is the most likely number of stars, given these observations? Explain how to compute this, or if it is not possible to compute, explain what additional information is needed and how it would affect the result.

Exercise 15
Consider the network shown in Figure telescope-nets-figure(ii), and assume that the two telescopes work identically. $N \in \{1,2,3\}$ and $M_1,M_2 \in \{0,1,2,3,4\}$, with the symbolic CPTs as described in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm on page enumeration-algorithm), calculate the probability distribution $\textbf{P}(N | M_1=2, M_2=2)$.

Three possible networks for the telescope problem.

Exercise 16
Consider the Bayes net shown in Figure politics-figure.
1. Which of the following are asserted by the network structure?
   1. $\textbf{P}(B,I,M) = \textbf{P}(B)\textbf{P}(I)\textbf{P}(M)$.
   2. $\textbf{P}(J|G) = \textbf{P}(J|G,I)$.
   3. $\textbf{P}(M|G,B,I) = \textbf{P}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
4.
A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?
5. Suppose we want to add the variable $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

A simple Bayes net with Boolean variables B = {BrokeElectionLaw}, I = {Indicted}, M = {PoliticallyMotivatedProsecutor}, G = {FoundGuilty}, J = {Jailed}.

Exercise 17
Consider the Bayes net shown in Figure politics-figure.
1. Which of the following are asserted by the network structure?
   1. $\textbf{P}(B,I,M) = \textbf{P}(B)\textbf{P}(I)\textbf{P}(M)$.
   2. $\textbf{P}(J|G) = \textbf{P}(J|G,I)$.
   3. $\textbf{P}(M|G,B,I) = \textbf{P}(M|G,B,I,J)$.
2. Calculate the value of $P(b,i,\lnot m,g,j)$.
3. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
4. A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?
5. Suppose we want to add the variable $P={PresidentialPardon}$ to the network; draw the new network and briefly explain any links you add.

Exercise 18 (VE-exercise)
Consider the variable elimination algorithm in Figure elimination-ask-algorithm (page elimination-ask-algorithm).
1. Section exact-inference-section applies variable elimination to the query $$\textbf{P}({Burglary} | {JohnCalls}={true},{MaryCalls}={true}) .$$ Perform the calculations indicated and check that the answer is correct.
2.
Count the number of arithmetic operations performed, and compare it with the number performed by the enumeration algorithm.
3. Suppose a network has the form of a chain: a sequence of Boolean variables $X_1,\ldots, X_n$ where ${Parents}(X_i)=\{X_{i-1}\}$ for $i=2,\ldots,n$. What is the complexity of computing $\textbf{P}(X_1 | X_n={true})$ using enumeration? Using variable elimination?
4. Prove that the complexity of running variable elimination on a polytree network is linear in the size of the tree for any variable ordering consistent with the network structure.

Exercise 19 (bn-complexity-exercise)
Investigate the complexity of exact inference in general Bayesian networks:
1. Prove that any 3-SAT problem can be reduced to exact inference in a Bayesian network constructed to represent the particular problem and hence that exact inference is NP-hard. (Hint: Consider a network with one variable for each proposition symbol, one for each clause, and one for the conjunction of clauses.)
2. The problem of counting the number of satisfying assignments for a 3-SAT problem is #P-complete. Show that exact inference is at least as hard as this.

Exercise 20 (primitive-sampling-exercise)
Consider the problem of generating a random sample from a specified distribution on a single variable. Assume you have a random number generator that returns a random number uniformly distributed between 0 and 1.
1. Let $X$ be a discrete variable with $P(X=x_i)=p_i$ for $i \in \{1,\ldots,k\}$. The cumulative distribution of $X$ gives the probability that $X \in \{x_1,\ldots,x_j\}$ for each possible $j$. (See also Appendix [math-appendix].) Explain how to calculate the cumulative distribution in $O(k)$ time and how to generate a single sample of $X$ from it. Can the latter be done in less than $O(k)$ time?
2. Now suppose we want to generate $N$ samples of $X$, where $N \gg k$. Explain how to do this with an expected run time per sample that is constant (i.e., independent of $k$).
3.
Now consider a continuous-valued variable with a parameterized distribution (e.g., Gaussian). How can samples be generated from such a distribution?
4. Suppose you want to query a continuous-valued variable and you are using a sampling algorithm such as LIKELIHOOD-WEIGHTING to do the inference. How would you have to modify the query-answering process?

Exercise 21
Consider the query $\textbf{P}({Rain} | {Sprinkler}={true},{WetGrass}={true})$ in Figure rain-clustering-figure(a) (page rain-clustering-figure) and how Gibbs sampling can answer it.
1. How many states does the Markov chain have?
2. Calculate the transition matrix $\textbf{Q}$ containing $q(\textbf{y} \rightarrow \textbf{y}')$ for all $\textbf{y}$, $\textbf{y}'$.
3. What does $\textbf{Q}^2$, the square of the transition matrix, represent?
4. What about $\textbf{Q}^n$ as $n\to\infty$?
5. Explain how to do probabilistic inference in Bayesian networks, assuming that $\textbf{Q}^n$ is available. Is this a practical way to do inference?

Exercise 22 (gibbs-proof-exercise)
This exercise explores the stationary distribution for Gibbs sampling methods.
1. The convex composition $[\alpha, q_1;\; 1-\alpha, q_2]$ of $q_1$ and $q_2$ is a transition probability distribution that first chooses one of $q_1$ and $q_2$ with probabilities $\alpha$ and $1-\alpha$, respectively, and then applies whichever is chosen. Prove that if $q_1$ and $q_2$ are in detailed balance with $\pi$, then their convex composition is also in detailed balance with $\pi$. (Note: this result justifies a variant of GIBBS-ASK in which variables are chosen at random rather than sampled in a fixed sequence.)
2.
Prove that if each of $q_1$ and $q_2$ has $\pi$ as its stationary distribution, then the sequential composition $q = q_1 \circ q_2$ also has $\pi$ as its stationary distribution.

Exercise 23 (MH-exercise)
The Metropolis–Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\textbf{x}$ (eventually) according to target probabilities $\pi(\textbf{x})$. (Typically we are interested in sampling from $\pi(\textbf{x})=P(\textbf{x} | \textbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new state $\textbf{x}'$ from a proposal distribution $q(\textbf{x}' | \textbf{x})$, given the current state $\textbf{x}$. Then, it probabilistically accepts or rejects $\textbf{x}'$ according to the acceptance probability $$\alpha(\textbf{x}' | \textbf{x}) = \min \left(1,\frac{\pi(\textbf{x}')\,q(\textbf{x} | \textbf{x}')}{\pi(\textbf{x})\,q(\textbf{x}' | \textbf{x})} \right) .$$ If the proposal is rejected, the state remains at $\textbf{x}$.
1. Consider an ordinary Gibbs sampling step for a specific variable $X_i$. Show that this step, considered as a proposal, is guaranteed to be accepted by Metropolis–Hastings. (Hence, Gibbs sampling is a special case of Metropolis–Hastings.)
2. Show that the two-step process above, viewed as a transition probability distribution, is in detailed balance with $\pi$.

Exercise 24 (soccer-rpm-exercise)
Three soccer teams $A$, $B$, and $C$ play each other once. Each match is between two teams, and can be won, drawn, or lost. Each team has a fixed, unknown degree of quality—an integer ranging from 0 to 3—and the outcome of a match depends probabilistically on the difference in quality between the two teams.
1. Construct a relational probability model to describe this domain, and suggest numerical values for all the necessary probability distributions.
2. Construct the equivalent Bayesian network for the three matches.
3. Suppose that in the first two matches $A$ beats $B$ and draws with $C$.
Using an exact inference algorithm of your choice, compute the posterior distribution for the outcome of the third match.
4. Suppose there are $n$ teams in the league and we have the results for all but the last match. How does the complexity of predicting the last game vary with $n$?
5. Investigate the application of MCMC to this problem. How quickly does it converge in practice and how well does it scale?

Exercise 1 (state-augmentation-exercise)
Show that any second-order Markov process can be rewritten as a first-order Markov process with an augmented set of state variables. Can this always be done parsimoniously, i.e., without increasing the number of parameters needed to specify the transition model?

Exercise 2 (markov-convergence-exercise)
In this exercise, we examine what happens to the probabilities in the umbrella world in the limit of long time sequences.
1. Suppose we observe an unending sequence of days on which the umbrella appears. Show that, as the days go by, the probability of rain on the current day increases monotonically toward a fixed point. Calculate this fixed point.
2. Now consider forecasting further and further into the future, given just the first two umbrella observations. First, compute the probability $P(r_{2+k}|u_1,u_2)$ for $k=1,\ldots,20$ and plot the results. You should see that the probability converges towards a fixed point. Prove that the exact value of this fixed point is 0.5.

Exercise 3 (island-exercise)
This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure forward-backward-algorithm (page forward-backward-algorithm). We wish to compute $\textbf{P}(\textbf{X}_k|\textbf{e}_{1:t})$ for $k=1,\ldots,t$. This will be done with a divide-and-conquer approach.
1. Suppose, for simplicity, that $t$ is odd, and let the halfway point be $h=(t+1)/2$.
Show that $\textbf{P}(\textbf{X}_k|\textbf{e}_{1:t})$ can be computed for $k=1,\ldots,h$ given just the initial forward message $\textbf{f}_{1:0}$, the backward message $\textbf{b}_{h+1:t}$, and the evidence $\textbf{e}_{1:h}$.
2. Show a similar result for the second half of the sequence.
3. Given the results of (a) and (b), a recursive divide-and-conquer algorithm can be constructed by first running forward along the sequence and then backward from the end, storing just the required messages at the middle and the ends. Then the algorithm is called on each half. Write out the algorithm in detail.
4. Compute the time and space complexity of the algorithm as a function of $t$, the length of the sequence. How does this change if we divide the input into more than two pieces?

Exercise 4 (flawed-viterbi-exercise)
On page flawed-viterbi-page, we outlined a flawed procedure for finding the most likely state sequence, given an observation sequence. The procedure involves finding the most likely state at each time step, using smoothing, and returning the sequence composed of these states. Show that, for some temporal probability models and observation sequences, this procedure returns an impossible state sequence (i.e., the posterior probability of the sequence is zero).

Exercise 5 (hmm-likelihood-exercise)
Equation (matrix-filtering-equation) describes the filtering process for the matrix formulation of HMMs. Give a similar equation for the calculation of likelihoods, which was described generically in Equation (forward-likelihood-equation).

Exercise 6
Consider the vacuum worlds of Figure vacuum-maze-ch4-figure (perfect sensing) and Figure vacuum-maze-hmm2-figure (noisy sensing). Suppose that the robot receives an observation sequence such that, with perfect sensing, there is exactly one possible location it could be in. Is this location necessarily the most probable location under noisy sensing for sufficiently small noise probability $\epsilon$?
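The flaw targeted by Exercise 4 can be exhibited with a tiny constructed model (the numbers below are invented for illustration; observations are made vacuous, so the smoothed marginals are just the state marginals): the per-step most-likely states form a sequence whose joint probability is zero.

```python
# Three states, deterministic transitions: 0 -> 2, 1 -> 1, 2 -> 1.
# The prior makes state 0 the t=1 argmax, but all of state 0's mass
# flows to state 2, so state 1's t=2 lead comes entirely from 1 and 2.
prior = [0.45, 0.35, 0.20]            # P(X_1)
T = [[0.0, 0.0, 1.0],
     [0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0]]

m1 = prior
m2 = [sum(prior[i] * T[i][j] for i in range(3)) for j in range(3)]  # [0, 0.55, 0.45]

argmax = lambda v: max(range(len(v)), key=v.__getitem__)
best_per_step = [argmax(m1), argmax(m2)]          # [0, 1]
joint_of_that_sequence = prior[0] * T[0][1]       # P(X_1=0, X_2=1) = 0.0
print(best_per_step, joint_of_that_sequence)
```

The per-step argmax sequence is (0, 1), yet the transition 0 → 1 is impossible, so the "most likely" sequence produced by the flawed procedure has zero posterior probability.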
Prove your claim or find a counterexample.

Exercise 7 (hmm-robust-exercise)
In Section hmm-localization-section, the prior distribution over locations is uniform and the transition model assumes an equal probability of moving to any neighboring square. What if those assumptions are wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the action actually tends to move southeast. Keeping the HMM model fixed, explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of $\epsilon$.

Exercise 8 (roomba-viterbi-exercise)
Consider a version of the vacuum robot (page vacuum-maze-hmm2-figure) that has the policy of going straight for as long as it can; only when it encounters an obstacle does it change to a new (randomly selected) heading. To model this robot, each state in the model consists of a (location, heading) pair. Implement this model and see how well the Viterbi algorithm can track a robot with this model. The robot’s policy is more constrained than the random-walk robot; does that mean that predictions of the most likely path are more accurate?

Exercise 9
We have described three policies for the vacuum robot: (1) a uniform random walk, (2) a bias for wandering southeast, as described in Exercise hmm-robust-exercise, and (3) the policy described in Exercise roomba-viterbi-exercise. Suppose an observer is given the observation sequence from a vacuum robot, but is not sure which of the three policies the robot is following. What approach should the observer use to find the most likely path, given the observations? Implement the approach and test it. How much does the localization accuracy suffer, compared to the case in which the observer knows which policy the robot is following?

Exercise 10
This exercise is concerned with filtering in an environment with no landmarks. Consider a vacuum robot in an empty room, represented by an $n \times m$ rectangular grid.
The robot’s location is hidden; the only evidence available to the observer is a noisy location sensor that gives an approximation to the robot’s location. If the robot is at location $(x, y)$ then with probability .1 the sensor gives the correct location, with probability .05 each it reports one of the 8 locations immediately surrounding $(x, y)$, with probability .025 each it reports one of the 16 locations that surround those 8, and with the remaining probability of .1 it reports “no reading.” The robot’s policy is to pick a direction and follow it with probability .8 on each step; the robot switches to a randomly selected new heading with probability .2 (or with probability 1 if it encounters a wall). Implement this as an HMM and do filtering to track the robot. How accurately can we track the robot’s path?

Exercise 11
This exercise is concerned with filtering in an environment with no landmarks. Consider a vacuum robot in an empty room, represented by an $n \times m$ rectangular grid. The robot’s location is hidden; the only evidence available to the observer is a noisy location sensor that gives an approximation to the robot’s location. If the robot is at location $(x, y)$ then with probability .1 the sensor gives the correct location, with probability .05 each it reports one of the 8 locations immediately surrounding $(x, y)$, with probability .025 each it reports one of the 16 locations that surround those 8, and with the remaining probability of .1 it reports “no reading.” The robot’s policy is to pick a direction and follow it with probability .7 on each step; the robot switches to a randomly selected new heading with probability .3 (or with probability 1 if it encounters a wall). Implement this as an HMM and do filtering to track the robot. How accurately can we track the robot’s path?

A Bayesian network representation of a switching Kalman filter.
The switching variable $S_t$ is a discrete state variable whose value determines the transition model for the continuous state variables $\textbf{X}_t$. For any discrete state $i$, the transition model $\textbf{P}(\textbf{X}_{t+1}|\textbf{X}_t,S_t=i)$ is a linear Gaussian model, just as in a regular Kalman filter. The transition model for the discrete state, $\textbf{P}(S_{t+1}|S_t)$, can be thought of as a matrix, as in a hidden Markov model.

Exercise 12 (switching-kf-exercise)
Often, we wish to monitor a continuous-state system whose behavior switches unpredictably among a set of $k$ distinct “modes.” For example, an aircraft trying to evade a missile can execute a series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in Figure switching-kf-figure.
1. Suppose that the discrete state $S_t$ has $k$ possible values and that the prior continuous state estimate $\textbf{P}(\textbf{X}_0)$ is a multivariate Gaussian distribution. Show that the prediction $\textbf{P}(\textbf{X}_1)$ is a mixture of Gaussians—that is, a weighted sum of Gaussians such that the weights sum to 1.
2. Show that if the current continuous state estimate $\textbf{P}(\textbf{X}_t|\textbf{e}_{1:t})$ is a mixture of $m$ Gaussians, then in the general case the updated state estimate $\textbf{P}(\textbf{X}_{t+1}|\textbf{e}_{1:t+1})$ will be a mixture of $km$ Gaussians.
3.
What aspect of the temporal process do the weights in the Gaussian mixture represent?
The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among the simplest hybrid dynamic models.

Exercise 13 (kalman-update-exercise)
Complete the missing step in the derivation of Equation (kalman-one-step-equation) on page kalman-one-step-equation, the first update step for the one-dimensional Kalman filter.

Exercise 14 (kalman-variance-exercise)
Let us examine the behavior of the variance update in Equation (kalman-univariate-equation) (page kalman-univariate-equation).
1. Plot the value of $\sigma_t^2$ as a function of $t$, given various values for $\sigma_x^2$ and $\sigma_z^2$.
2. Show that the update has a fixed point $\sigma^2$ such that $\sigma_t^2 \rightarrow \sigma^2$ as $t \rightarrow \infty$, and calculate the value of $\sigma^2$.
3. Give a qualitative explanation for what happens as $\sigma_x^2\rightarrow 0$ and as $\sigma_z^2\rightarrow 0$.

Exercise 15 (sleep1-exercise)
A professor wants to know if students are getting enough sleep. Each day, the professor observes whether the students sleep in class, and whether they have red eyes. The professor has the following domain theory:
- The prior probability of getting enough sleep, with no observations, is 0.7.
- The probability of getting enough sleep on night $t$ is 0.8 given that the student got enough sleep the previous night, and 0.3 if not.
- The probability of having red eyes is 0.2 if the student got enough sleep, and 0.7 if not.
- The probability of sleeping in class is 0.1 if the student got enough sleep, and 0.3 if not.
Formulate this information as a dynamic Bayesian network that the professor could use to filter or predict from a sequence of observations. Then reformulate it as a hidden Markov model that has only a single observation variable. Give the complete probability tables for the model.

Exercise 16
A professor wants to know if students are getting enough sleep.
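Exercise 14's fixed point can be explored numerically. A sketch assuming the standard one-dimensional variance update $\sigma_{t+1}^2 = (\sigma_t^2+\sigma_x^2)\,\sigma_z^2 / (\sigma_t^2+\sigma_x^2+\sigma_z^2)$, with arbitrarily chosen example variances:

```python
import math

# One-dimensional Kalman variance update, iterated to convergence.
sx2, sz2 = 2.0, 1.0   # example process and sensor variances (arbitrary)

s2 = 1.0              # any nonnegative starting variance works
for _ in range(100):
    s2 = (s2 + sx2) * sz2 / (s2 + sx2 + sz2)

# Analytic fixed point: solve s2 = (s2 + sx2) sz2 / (s2 + sx2 + sz2),
# i.e. s2^2 + sx2*s2 - sx2*sz2 = 0, taking the positive root.
fixed = (-sx2 + math.sqrt(sx2**2 + 4 * sx2 * sz2)) / 2
print(s2, fixed)
```

The iteration converges very quickly (the update map is a strong contraction near the fixed point), and the limit is independent of the starting variance, which is what part 2 asks you to prove.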
Each day, the professor observes whether the students sleep in class, and whether they have red eyes. The professor has the following domain theory:
- The prior probability of getting enough sleep, with no observations, is 0.7.
- The probability of getting enough sleep on night $t$ is 0.8 given that the student got enough sleep the previous night, and 0.3 if not.
- The probability of having red eyes is 0.2 if the student got enough sleep, and 0.7 if not.
- The probability of sleeping in class is 0.1 if the student got enough sleep, and 0.3 if not.
Formulate this information as a dynamic Bayesian network that the professor could use to filter or predict from a sequence of observations. Then reformulate it as a hidden Markov model that has only a single observation variable. Give the complete probability tables for the model.

Exercise 17
For the DBN specified in Exercise sleep1-exercise and for the evidence values
$\textbf{e}_1 =$ not red eyes, not sleeping in class
$\textbf{e}_2 =$ red eyes, not sleeping in class
$\textbf{e}_3 =$ red eyes, sleeping in class
perform the following computations:
1. State estimation: Compute $P({EnoughSleep}_t | \textbf{e}_{1:t})$ for each of $t = 1,2,3$.
2. Smoothing: Compute $P({EnoughSleep}_t | \textbf{e}_{1:3})$ for each of $t = 1,2,3$.
3. Compare the filtered and smoothed probabilities for $t=1$ and $t=2$.

Exercise 18
Suppose that a particular student shows up with red eyes and sleeps in class every day. Given the model described in Exercise sleep1-exercise, explain why the probability that the student had enough sleep the previous night converges to a fixed point rather than continuing to go down as we gather more days of evidence. What is the fixed point?
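The fixed point asked about here can be found by iterating the predict-update filter for the Exercise 15 model under the constant evidence; a minimal sketch (the two sensors are conditionally independent given the state, so their likelihoods multiply):

```python
# Filtering for the sleep model under constant evidence:
# red eyes AND sleeping in class on every day.
prior = 0.7                      # P(EnoughSleep at t=0)
trans = {True: 0.8, False: 0.3}  # P(EnoughSleep_{t+1}=true | EnoughSleep_t)
like = {True: 0.2 * 0.1,         # P(red eyes | s) * P(sleep in class | s)
        False: 0.7 * 0.3}

p = prior                        # filtered P(EnoughSleep_t = true)
for _ in range(50):
    pred = trans[True] * p + trans[False] * (1 - p)   # prediction step
    num = like[True] * pred                           # evidence update
    p = num / (num + like[False] * (1 - pred))

print(round(p, 4))  # converges to about 0.0432
```

The probability does not go to zero because each prediction step mixes back in some probability of having slept well (0.3 from the "not enough sleep" state), which balances the unfavorable evidence.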
Answer this both numerically (by computation) and analytically.

Exercise 19 (battery-sequence-exercise)
This exercise analyzes in more detail the persistent-failure model for the battery sensor in Figure battery-persistence-figure(a) (page battery-persistence-figure).
1. Figure battery-persistence-figure(b) stops at $t=32$. Describe qualitatively what should happen as $t\to\infty$ if the sensor continues to read 0.
2. Suppose that the external temperature affects the battery sensor in such a way that transient failures become more likely as temperature increases. Show how to augment the DBN structure in Figure battery-persistence-figure(a), and explain any required changes to the CPTs.
3. Given the new network structure, can battery readings be used by the robot to infer the current temperature?

Exercise 20 (dbn-elimination-exercise)
Consider applying the variable elimination algorithm to the umbrella DBN unrolled for three slices, where the query is $\textbf{P}(R_3|u_1,u_2,u_3)$. Show that the space complexity of the algorithm—the size of the largest factor—is the same, regardless of whether the rain variables are eliminated in forward or backward order.

Exercise 1 (almanac-game)
(Adapted from David Heckerman.) This exercise concerns the Almanac Game, which is used by decision analysts to calibrate numeric estimation. For each of the questions that follow, give your best guess of the answer, that is, a number that you think is as likely to be too high as it is to be too low. Also give your guess at a 25th percentile estimate, that is, a number that you think has a 25% chance of being too high, and a 75% chance of being too low. Do the same for the 75th percentile. (Thus, you should give three estimates in all—low, median, and high—for each question.)
1. Number of passengers who flew between New York and Los Angeles in 1989.
2. Population of Warsaw in 1992.
3. Year in which Coronado discovered the Mississippi River.
4.
Number of votes received by Jimmy Carter in the 1976 presidential election.
5. Age of the oldest living tree, as of 2002.
6. Height of the Hoover Dam in feet.
7. Number of eggs produced in Oregon in 1985.
8. Number of Buddhists in the world in 1992.
9. Number of deaths due to AIDS in the United States in 1981.
10. Number of U.S. patents granted in 1901.
The correct answers appear after the last exercise of this chapter. From the point of view of decision analysis, the interesting thing is not how close your median guesses came to the real answers, but rather how often the real answer came within your 25% and 75% bounds. If it was about half the time, then your bounds are accurate. But if you’re like most people, you will be more sure of yourself than you should be, and fewer than half the answers will fall within the bounds. With practice, you can calibrate yourself to give realistic bounds, and thus be more useful in supplying information for decision making. Try this second set of questions and see if there is any improvement:
1. Year of birth of Zsa Zsa Gabor.
2. Maximum distance from Mars to the sun in miles.
3. Value in dollars of exports of wheat from the United States in 1992.
4. Tons handled by the port of Honolulu in 1991.
5. Annual salary in dollars of the governor of California in 1993.
6. Population of San Diego in 1990.
7. Year in which Roger Williams founded Providence, Rhode Island.
8. Height of Mt. Kilimanjaro in feet.
9. Length of the Brooklyn Bridge in feet.
10. Number of deaths due to automobile accidents in the United States in 1992.

Exercise 2
Chris considers four used cars before buying the one with maximum expected utility. Pat considers ten cars and does the same. All other things being equal, which one is more likely to have the better car? Which is more likely to be disappointed with their car’s quality? By how much (in terms of standard deviations of expected quality)?

Exercise 3
Chris considers five used cars before buying the one with maximum expected utility.
Pat considers eleven cars and does the same. All other things being equal, which one is more likely to have the better car? Which is more likely to be disappointed with their car’s quality? By how much (in terms of standard deviations of expected quality)?

Exercise 4 (St-Petersburg-exercise) In 1713, Nicolas Bernoulli stated a puzzle, now called the St. Petersburg paradox, which works as follows. You have the opportunity to play a game in which a fair coin is tossed repeatedly until it comes up heads. If the first heads appears on the $n$th toss, you win $2^n$ dollars.

1. Show that the expected monetary value of this game is infinite.
2. How much would you, personally, pay to play the game?
3. Nicolas’s cousin Daniel Bernoulli resolved the apparent paradox in 1738 by suggesting that the utility of money is measured on a logarithmic scale (i.e., $U(S_{n}) = a\log_2 n + b$, where $S_n$ is the state of having $n$). What is the expected utility of the game under this assumption?
4. What is the maximum amount that it would be rational to pay to play the game, assuming that one’s initial wealth is $k$?

Exercise 5 Write a computer program to automate the process in Exercise assessment-exercise. Try your program out on several people of different net worth and political outlook. Comment on the consistency of your results, both for an individual and across individuals.

Exercise 6 (surprise-candy-exercise) The Surprise Candy Company makes candy in two flavors: 75% are strawberry flavor and 25% are anchovy flavor. Each new piece of candy starts out with a round shape; as it moves along the production line, a machine randomly selects a certain percentage to be trimmed into a square; then, each piece is wrapped in a wrapper whose color is chosen randomly to be red or brown. 70% of the strawberry candies are round and 70% have a red wrapper, while 90% of the anchovy candies are square and 90% have a brown wrapper.
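Treating Shape and Wrapper as conditionally independent given Flavor (one plausible reading of the production process described above), these percentages determine a full joint distribution. The following Python sketch, with dictionary names of our own choosing, makes that concrete by enumeration:

```python
# Probabilities from the problem statement, assuming Shape and Wrapper
# are conditionally independent given Flavor (a naive-Bayes reading).
P_flavor = {"strawberry": 0.75, "anchovy": 0.25}
P_round = {"strawberry": 0.70, "anchovy": 0.10}  # P(Shape = round | flavor)
P_red = {"strawberry": 0.70, "anchovy": 0.10}    # P(Wrapper = red | flavor)

def joint(flavor, shape, wrapper):
    """P(flavor, shape, wrapper) under the factorization above."""
    p = P_flavor[flavor]
    p *= P_round[flavor] if shape == "round" else 1 - P_round[flavor]
    p *= P_red[flavor] if wrapper == "red" else 1 - P_red[flavor]
    return p

# Marginal probability of a red wrapper, and the posterior on flavor
# after observing a round candy in a red wrapper.
p_red = sum(joint(f, s, "red") for f in P_flavor for s in ("round", "square"))
p_straw = joint("strawberry", "round", "red") / sum(
    joint(f, "round", "red") for f in P_flavor)
print(p_red, p_straw)  # approximately 0.55 and 0.993
```

Any conditional query about the box can be answered by summing and normalizing entries of this joint in the same way.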
All candies are sold individually in sealed, identical, black boxes.

Now you, the customer, have just bought a Surprise candy at the store but have not yet opened the box. Consider the three Bayes nets in Figure candy-figure.

1. Which network(s) can correctly represent $\textbf{P}(Flavor,Wrapper,Shape)$?
2. Which network is the best representation for this problem?
3. Does network (i) assert that $\textbf{P}(Wrapper|Shape) = \textbf{P}(Wrapper)$?
4. What is the probability that your candy has a red wrapper?
5. In the box is a round candy with a red wrapper. What is the probability that its flavor is strawberry?
6. An unwrapped strawberry candy is worth $s$ on the open market and an unwrapped anchovy candy is worth $a$. Write an expression for the value of an unopened candy box.
7. A new law prohibits trading of unwrapped candies, but it is still legal to trade wrapped candies (out of the box). Is an unopened candy box now worth more than, less than, or the same as before?

Three proposed Bayes nets for the Surprise Candy problem

Exercise 7 (surprise-candy-exercise) The Surprise Candy Company makes candy in two flavors: 70% are strawberry flavor and 30% are anchovy flavor. Each new piece of candy starts out with a round shape; as it moves along the production line, a machine randomly selects a certain percentage to be trimmed into a square; then, each piece is wrapped in a wrapper whose color is chosen randomly to be red or brown. 80% of the strawberry candies are round and 80% have a red wrapper, while 90% of the anchovy candies are square and 90% have a brown wrapper. All candies are sold individually in sealed, identical, black boxes.

Now you, the customer, have just bought a Surprise candy at the store but have not yet opened the box. Consider the three Bayes nets in Figure candy-figure.

1. Which network(s) can correctly represent $\textbf{P}(Flavor,Wrapper,Shape)$?
2. Which network is the best representation for this problem?
3.
Does network (i) assert that $\textbf{P}(Wrapper|Shape) = \textbf{P}(Wrapper)$?
4. What is the probability that your candy has a red wrapper?
5. In the box is a round candy with a red wrapper. What is the probability that its flavor is strawberry?
6. An unwrapped strawberry candy is worth $s$ on the open market and an unwrapped anchovy candy is worth $a$. Write an expression for the value of an unopened candy box.
7. A new law prohibits trading of unwrapped candies, but it is still legal to trade wrapped candies (out of the box). Is an unopened candy box now worth more than, less than, or the same as before?

Exercise 8 Prove that the judgments $B \succ A$ and $C \succ D$ in the Allais paradox (page allais-page) violate the axiom of substitutability.

Exercise 9 Consider the Allais paradox described on page allais-page: an agent who prefers $B$ over $A$ (taking the sure thing), and $C$ over $D$ (taking the higher EMV) is not acting rationally, according to utility theory. Do you think this indicates a problem for the agent, a problem for the theory, or no problem at all? Explain.

Exercise 10 Tickets to a lottery cost $1. There are two possible prizes: a $10 payoff with probability 1/50, and a $1,000,000 payoff with probability 1/2,000,000. What is the expected monetary value of a lottery ticket? When (if ever) is it rational to buy a ticket? Be precise—show an equation involving utilities. You may assume current wealth of $k$ and that $U(S_k)=0$. You may also assume that $U(S_{k+10}) = 10\times U(S_{k+1})$, but you may not make any assumptions about $U(S_{k+1{,}000{,}000})$. Sociological studies show that people with lower income buy a disproportionate number of lottery tickets. Do you think this is because they are worse decision makers or because they have a different utility function?
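The expected-monetary-value part of this question is a one-line computation; the sketch below covers only that arithmetic, not the utility analysis the exercise asks for:

```python
# EMV of one ticket: the prizes weighted by their probabilities,
# minus the $1 ticket price.
ticket_cost = 1
emv = 10 * (1 / 50) + 1_000_000 * (1 / 2_000_000) - ticket_cost
print(emv)  # approximately -0.3: on average a ticket loses 30 cents
```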
Consider the value of contemplating the possibility of winning the lottery versus the value of contemplating becoming an action hero while watching an adventure movie.

Exercise 11 (assessment-exercise) Assess your own utility for different incremental amounts of money by running a series of preference tests between some definite amount $M_1$ and a lottery $[p, M_2; (1-p), 0]$. Choose different values of $M_1$ and $M_2$, and vary $p$ until you are indifferent between the two choices. Plot the resulting utility function.

Exercise 12 How much is a micromort worth to you? Devise a protocol to determine this. Ask questions based both on paying to avoid risk and being paid to accept risk.

Exercise 13 (kmax-exercise) Let continuous variables $X_1,\ldots,X_k$ be independently distributed according to the same probability density function $f(x)$. Prove that the density function for $\max\{X_1,\ldots,X_k\}$ is given by $kf(x)(F(x))^{k-1}$, where $F$ is the cumulative distribution for $f$.

Exercise 14 Economists often make use of an exponential utility function for money: $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.

1. Assume Mary has an exponential utility function with $R = 500$. Mary is given the choice between receiving $500 with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
2. Consider the choice between receiving $100 with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $500 and a 50% probability of winning nothing.
Approximate the value of $R$ (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)

Exercise 15 Economists often make use of an exponential utility function for money: $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As $R$ (which is measured in the same units as $x$) becomes larger, the individual becomes less risk-averse.

1. Assume Mary has an exponential utility function with $R = 400$. Mary is given the choice between receiving $400 with certainty (probability 1) or participating in a lottery which has a 60% probability of winning $5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
2. Consider the choice between receiving $100 with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $500 and a 50% probability of winning nothing. Approximate the value of $R$ (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write a short program to help you solve this problem.)

Exercise 16 Alex is given the choice between two games. In Game 1, a fair coin is flipped and if it comes up heads, Alex receives $100. If the coin comes up tails, Alex receives nothing. In Game 2, a fair coin is flipped twice. Each time the coin comes up heads, Alex receives $50, and Alex receives nothing for each coin flip that comes up tails.
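Both games have the same expected monetary value, so any strict preference between them must come from the spread of outcomes. A quick check (a Python sketch of ours, enumerating the equally likely outcomes) shows the difference:

```python
import itertools
import statistics

# Game 1: one fair flip, $100 on heads, $0 on tails.
game1 = [100, 0]
# Game 2: two fair flips, $50 per head; the four outcomes are equally likely.
game2 = [50 * (h1 + h2) for h1, h2 in itertools.product((0, 1), repeat=2)]

assert statistics.mean(game1) == statistics.mean(game2) == 50
print(statistics.pvariance(game1), statistics.pvariance(game2))  # 2500 vs 1250
```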
Assuming that Alex has a monotonically increasing utility function for money in the range [$0, $100], show mathematically that if Alex prefers Game 2 to Game 1, then Alex is risk averse (at least with respect to this range of monetary amounts).

Show that if $X_1$ and $X_2$ are preferentially independent of $X_3$, and $X_2$ and $X_3$ are preferentially independent of $X_1$, then $X_3$ and $X_1$ are preferentially independent of $X_2$.

Exercise 17 (airport-au-id-exercise) Repeat Exercise airport-id-exercise, using the action-utility representation shown in Figure airport-au-id-figure.

Exercise 18 For either of the airport-siting diagrams from Exercises airport-id-exercise and airport-au-id-exercise, to which conditional probability table entry is the utility most sensitive, given the available evidence?

Exercise 19 Modify and extend the Bayesian network code in the code repository to provide for creation and evaluation of decision networks and the calculation of information value.

Exercise 20 Consider a student who has the choice to buy or not buy a textbook for a course. We’ll model this as a decision problem with one Boolean decision node, $B$, indicating whether the agent chooses to buy the book, and two Boolean chance nodes, $M$, indicating whether the student has mastered the material in the book, and $P$, indicating whether the student passes the course. Of course, there is also a utility node, $U$. A certain student, Sam, has an additive utility function: 0 for not buying the book and -$100 for buying it; and $2000 for passing the course and 0 for not passing. Sam’s conditional probability estimates are as follows:

$$\begin{array}{ll}
P(p|b,m) = 0.9 & P(m|b) = 0.9 \\
P(p|b,\lnot m) = 0.5 & P(m|\lnot b) = 0.7 \\
P(p|\lnot b, m) = 0.8 & \\
P(p|\lnot b, \lnot m) = 0.3 & \\
\end{array}$$

You might think that $P$ would be independent of $B$ given $M$, but this course has an open-book final—so having the book helps.

1. Draw the decision network for this problem.
2.
Compute the expected utility of buying the book and of not buying it.
3. What should Sam do?

Exercise 21 (airport-id-exercise) This exercise completes the analysis of the airport-siting problem in Figure airport-id-figure.

1. Provide reasonable variable domains, probabilities, and utilities for the network, assuming that there are three possible sites.
2. Solve the decision problem.
3. What happens if changes in technology mean that each aircraft generates half the noise?
4. What if noise avoidance becomes three times more important?
5. Calculate the VPI for ${AirTraffic}$, ${Litigation}$, and ${Construction}$ in your model.

Exercise 22 (car-vpi-exercise) (Adapted from Pearl [Pearl:1988].) A used-car buyer can decide to carry out various tests with various costs (e.g., kick the tires, take the car to a qualified mechanic) and then, depending on the outcome of the tests, decide which car to buy. We will assume that the buyer is deciding whether to buy car $c_1$, that there is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs $50.

A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ costs $1,500, and its market value is $2,000 if it is in good shape; if not, $700 in repairs will be needed to make it in good shape. The buyer’s estimate is that $c_1$ has a 70% chance of being in good shape.

1. Draw the decision network that represents this problem.
2. Calculate the expected net gain from buying $c_1$, given no test.
3. Tests can be described by the probability that the car will pass or fail the test given that the car is in good or bad shape. We have the following information: $P({pass}(c_1,t_1) \mid q^+(c_1)) = 0.8$ and $P({pass}(c_1,t_1) \mid q^-(c_1)) = 0.35$. Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
4.
Calculate the optimal decisions given either a pass or a fail, and their expected utilities.
5. Calculate the value of information of the test, and derive an optimal conditional plan for the buyer.

Exercise 23 (nonnegative-VPI-exercise) Recall the definition of value of information in Section VPI-section.

1. Prove that the value of information is nonnegative and order independent.
2. Explain why it is that some people would prefer not to get some information—for example, not wanting to know the sex of their baby when an ultrasound is done.
3. A function $f$ on sets is submodular if, for any element $x$ and any sets $A$ and $B$ such that $A\subseteq B$, adding $x$ to $A$ gives a greater increase in $f$ than adding $x$ to $B$: $$A\subseteq B \Rightarrow (f(A\cup\{x\}) - f(A)) \geq (f(B\cup\{x\}) - f(B)).$$ Submodularity captures the intuitive notion of diminishing returns. Is the value of information, viewed as a function $f$ on sets of possible observations, submodular? Prove this or find a counterexample.

Exercise 1 (mdp-model-exercise) For the $4\times 3$ world shown in Figure sequential-decision-world-figure, calculate which squares can be reached from (1,1) by the action sequence $[{Up},{Up},{Right},{Right},{Right}]$ and with what probabilities. Explain how this computation is related to the prediction task (see Section general-filtering-section) for a hidden Markov model.

Exercise 2 (mdp-model-exercise) For the $4\times 3$ world shown in Figure sequential-decision-world-figure, calculate which squares can be reached from (1,1) by the action sequence $[{Right},{Right},{Right},{Up},{Up}]$ and with what probabilities.
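One way to carry out such a calculation is to push the state distribution forward through the action sequence, with no evidence, exactly as in HMM prediction. The sketch below (ours, not the book's code) assumes the standard $4\times 3$ geometry with a wall at (2,2), terminal states at (4,2) and (4,3), and the usual 0.8/0.1/0.1 motion model:

```python
# Forward prediction in the 4x3 world: propagate a distribution over
# states through an action sequence (no evidence, as in HMM prediction).
WALL = {(2, 2)}
TERMINAL = {(4, 3), (4, 2)}
STATES = [(x, y) for x in range(1, 5) for y in range(1, 4) if (x, y) not in WALL]

MOVES = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}
PERP = {"Up": ("Left", "Right"), "Down": ("Left", "Right"),
        "Left": ("Up", "Down"), "Right": ("Up", "Down")}

def step(state, action):
    """Distribution over successor states for one action."""
    if state in TERMINAL:          # terminals are absorbing
        return {state: 1.0}
    result = {}
    for a, p in [(action, 0.8), (PERP[action][0], 0.1), (PERP[action][1], 0.1)]:
        dx, dy = MOVES[a]
        nxt = (state[0] + dx, state[1] + dy)
        if nxt not in STATES:      # off the grid or into the wall: stay put
            nxt = state
        result[nxt] = result.get(nxt, 0.0) + p
    return result

def predict(b, actions):
    """Push belief b (dict state -> probability) through the actions."""
    for action in actions:
        nb = {}
        for s, p in b.items():
            for s2, p2 in step(s, action).items():
                nb[s2] = nb.get(s2, 0.0) + p * p2
        b = nb
    return b

b = predict({(1, 1): 1.0}, ["Right", "Right", "Right", "Up", "Up"])
print(sorted(b.items()))
```

Replacing the action list reproduces the computation for the other sequence; each step is one matrix-vector product in disguise.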
Explain how this computation is related to the prediction task (see Section general-filtering-section) for a hidden Markov model.

Exercise 3 Select a specific member of the set of policies that are optimal for $R(s)>0$ as shown in Figure sequential-decision-policies-figure(b), and calculate the fraction of time the agent spends in each state, in the limit, if the policy is executed forever. (Hint: Construct the state-to-state transition probability matrix corresponding to the policy and see Exercise markov-convergence-exercise.)

Exercise 4 (nonseparable-exercise) Suppose that we define the utility of a state sequence to be the maximum reward obtained in any state in the sequence. Show that this utility function does not result in stationary preferences between state sequences. Is it still possible to define a utility function on states such that MEU decision making gives optimal behavior?

Exercise 5 Can any finite search problem be translated exactly into a Markov decision problem such that an optimal solution of the latter is also an optimal solution of the former? If so, explain precisely how to translate the problem and how to translate the solution back; if not, explain precisely why not (i.e., give a counterexample).

Exercise 6 (reward-equivalence-exercise) Sometimes MDPs are formulated with a reward function $R(s,a)$ that depends on the action taken or with a reward function $R(s,a,s')$ that also depends on the outcome state.

1. Write the Bellman equations for these formulations.
2. Show how an MDP with reward function $R(s,a,s')$ can be transformed into a different MDP with reward function $R(s,a)$, such that optimal policies in the new MDP correspond exactly to optimal policies in the original MDP.
3. Now do the same to convert MDPs with $R(s,a)$ into MDPs with $R(s)$.

Exercise 7 (threshold-cost-exercise) For the environment shown in Figure sequential-decision-world-figure, find all the threshold values for $R(s)$ such that the optimal policy changes when the threshold is crossed.
You will need a way to calculate the optimal policy and its value for fixed $R(s)$. (Hint: Prove that the value of any fixed policy varies linearly with $R(s)$.)

Exercise 8 (vi-contraction-exercise) Equation (vi-contraction-equation) on page vi-contraction-equation states that the Bellman operator is a contraction.

1. Show that, for any functions $f$ and $g$, $$|\max_a f(a) - \max_a g(a)| \leq \max_a |f(a) - g(a)|.$$
2. Write out an expression for $|(B\,U_i - B\,U'_i)(s)|$ and then apply the result from (1) to complete the proof that the Bellman operator is a contraction.

Exercise 9 This exercise considers two-player MDPs that correspond to zero-sum, turn-taking games like those in Chapter game-playing-chapter. Let the players be $A$ and $B$, and let $R(s)$ be the reward for player $A$ in state $s$. (The reward for $B$ is always equal and opposite.)

1. Let $U_A(s)$ be the utility of state $s$ when it is $A$’s turn to move in $s$, and let $U_B(s)$ be the utility of state $s$ when it is $B$’s turn to move in $s$. All rewards and utilities are calculated from $A$’s point of view (just as in a minimax game tree). Write down Bellman equations defining $U_A(s)$ and $U_B(s)$.
2. Explain how to do two-player value iteration with these equations, and define a suitable termination criterion.
3. Consider the game described in Figure line-game4-figure on page line-game4-figure. Draw the state space (rather than the game tree), showing the moves by $A$ as solid lines and moves by $B$ as dashed lines. Mark each state with $R(s)$. You will find it helpful to arrange the states $(s_A,s_B)$ on a two-dimensional grid, using $s_A$ and $s_B$ as “coordinates.”
4. Now apply two-player value iteration to solve this game, and derive the optimal policy.

(a) $3 \times 3$ world for Exercise 3x3-mdp-exercise. The reward for each state is indicated. The upper right square is a terminal state. (b) $101 \times 3$ world for Exercise 101x3-mdp-exercise (omitting 93 identical columns in the middle).
The start state has reward 0.

Exercise 10 (3x3-mdp-exercise) Consider the $3 \times 3$ world shown in Figure grid-mdp-figure(a). The transition model is the same as in the $4\times 3$ world of Figure sequential-decision-world-figure: 80% of the time the agent goes in the direction it selects; the rest of the time it moves at right angles to the intended direction.

Implement value iteration for this world for each value of $r$ below. Use discounted rewards with a discount factor of 0.99. Show the policy obtained in each case. Explain intuitively why the value of $r$ leads to each policy.

1. $r = -100$
2. $r = -3$
3. $r = 0$
4. $r = +3$

Exercise 11 (101x3-mdp-exercise) Consider the $101 \times 3$ world shown in Figure grid-mdp-figure(b). In the start state the agent has a choice of two deterministic actions, Up or Down, but in the other states the agent has one deterministic action, Right. Assuming a discounted reward function, for what values of the discount $\gamma$ should the agent choose Up and for which Down? Compute the utility of each action as a function of $\gamma$. (Note that this simple example actually reflects many real-world situations in which one must weigh the value of an immediate action versus the potential continual long-term consequences, such as choosing to dump pollutants into a lake.)

Exercise 12 Consider an undiscounted MDP having three states, (1, 2, 3), with rewards $-1$, $-2$, $0$, respectively. State 3 is a terminal state. In states 1 and 2 there are two possible actions: $a$ and $b$. The transition model is as follows:

- In state 1, action $a$ moves the agent to state 2 with probability 0.8 and makes the agent stay put with probability 0.2.
- In state 2, action $a$ moves the agent to state 1 with probability 0.8 and makes the agent stay put with probability 0.2.
- In either state 1 or state 2, action $b$ moves the agent to state 3 with probability 0.1 and makes the agent stay put with probability 0.9.

Answer the following questions:

1.
What can be determined qualitatively about the optimal policy in states 1 and 2?
2. Apply policy iteration, showing each step in full, to determine the optimal policy and the values of states 1 and 2. Assume that the initial policy has action $b$ in both states.
3. What happens to policy iteration if the initial policy has action $a$ in both states? Does discounting help? Does the optimal policy depend on the discount factor?

Exercise 13 Consider the $4\times 3$ world shown in Figure sequential-decision-world-figure.

1. Implement an environment simulator for this environment, such that the specific geography of the environment is easily altered. Some code for doing this is already in the online code repository.
2. Create an agent that uses policy iteration, and measure its performance in the environment simulator from various starting states. Perform several experiments from each starting state, and compare the average total reward received per run with the utility of the state, as determined by your algorithm.
3. Experiment with increasing the size of the environment. How does the run time for policy iteration vary with the size of the environment?

Exercise 14 (policy-loss-exercise) How can the value determination algorithm be used to calculate the expected loss experienced by an agent using a given set of utility estimates ${U}$ and an estimated model ${P}$, compared with an agent using correct values?

Exercise 15 (4x3-pomdp-exercise) Let the initial belief state $b_0$ for the $4\times 3$ POMDP on page 4x3-pomdp-page be the uniform distribution over the nonterminal states, i.e., $\langle \frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},\frac{1}{9},0,0 \rangle$. Calculate the exact belief state $b_1$ after the agent moves and its sensor reports 1 adjacent wall.
Also calculate $b_2$ assuming that the same thing happens again.

Exercise 16 What is the time complexity of $d$ steps of POMDP value iteration for a sensorless environment?

Exercise 17 (2state-pomdp-exercise) Consider a version of the two-state POMDP on page 2state-pomdp-page in which the sensor is 90% reliable in state 0 but provides no information in state 1 (that is, it reports 0 or 1 with equal probability). Analyze, either qualitatively or quantitatively, the utility function and the optimal policy for this problem.

Exercise 18 (dominant-equilibrium-exercise) Show that a dominant strategy equilibrium is a Nash equilibrium, but not vice versa.

Exercise 19 In the children’s game of rock–paper–scissors each player reveals at the same time a choice of rock, paper, or scissors. Paper wraps rock, rock blunts scissors, and scissors cut paper. In the extended version rock–paper–scissors–fire–water, fire beats rock, paper, and scissors; rock, paper, and scissors beat water; and water beats fire. Write out the payoff matrix and find a mixed-strategy solution to this game.

Exercise 20 Solve the game of three-finger Morra.

Exercise 21 In the Prisoner’s Dilemma, consider the case where after each round, Alice and Bob have probability $X$ of meeting again. Suppose both players choose the perpetual punishment strategy (where each will choose ${refuse}$ unless the other player has ever played ${testify}$). Assume neither player has played ${testify}$ thus far. What is the expected future total payoff for choosing to ${testify}$ versus ${refuse}$ when $X = .2$? How about when $X = .05$?
For what value of $X$ is the expected future total payoff the same whether one chooses to ${testify}$ or ${refuse}$ in the current round?

Exercise 22 The following payoff matrix, from Blinder [Blinder:1983] by way of Bernstein [Bernstein:1996], shows a game between politicians and the Federal Reserve.

$$\begin{array}{|l|c|c|c|}
\hline
 & Fed: contract & Fed: do nothing & Fed: expand \\
\hline
Pol: contract & F=7, P=1 & F=9, P=4 & F=6, P=6 \\
Pol: do nothing & F=8, P=2 & F=5, P=5 & F=4, P=9 \\
Pol: expand & F=3, P=3 & F=2, P=7 & F=1, P=8 \\
\hline
\end{array}$$

Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can choose to do nothing.) Each side also has preferences for who should do what—neither side wants to look like the bad guys. The payoffs shown are simply the rank orderings: 9 for first choice through 1 for last choice. Find the Nash equilibrium of the game in pure strategies. Is this a Pareto-optimal solution? You might wish to analyze the policies of recent administrations in this light.

Exercise 23 A Dutch auction is similar to an English auction, but rather than starting the bidding at a low price and increasing, in a Dutch auction the seller starts at a high price and gradually lowers the price until some buyer is willing to accept that price. (If multiple bidders accept the price, one is arbitrarily chosen as the winner.) More formally, the seller begins with a price $p$ and gradually lowers $p$ by increments of $d$ until at least one buyer accepts the price. Assuming all bidders act rationally, is it true that for arbitrarily small $d$, a Dutch auction will always result in the bidder with the highest value for the item obtaining the item? If so, show mathematically why.
If not, explain how it may be possible for the bidder with the highest value for the item not to obtain it.

Exercise 24 Imagine an auction mechanism that is just like an ascending-bid auction, except that at the end, the winning bidder, the one who bid $b_{max}$, pays only $b_{max}/2$ rather than $b_{max}$. Assuming all agents are rational, what is the expected revenue to the auctioneer for this mechanism, compared with a standard ascending-bid auction?

Exercise 25 Teams in the National Hockey League historically received 2 points for winning a game and 0 for losing. If the game is tied, an overtime period is played; if nobody wins in overtime, the game is a tie and each team gets 1 point. But league officials felt that teams were playing too conservatively in overtime (to avoid a loss), and it would be more exciting if overtime produced a winner. So in 1999 the officials experimented in mechanism design: the rules were changed, giving a team that loses in overtime 1 point, not 0. It is still 2 points for a win and 1 for a tie.

1. Was hockey a zero-sum game before the rule change? After?
2. Suppose that at a certain time $t$ in a game, the home team has probability $p$ of winning in regulation time, probability $0.78-p$ of losing, and probability 0.22 of going into overtime, where they have probability $q$ of winning, $.9-q$ of losing, and .1 of tying. Give equations for the expected value for the home and visiting teams.
3. Imagine that it were legal and ethical for the two teams to enter into a pact where they agree that they will skate to a tie in regulation time, and then both try in earnest to win in overtime. Under what conditions, in terms of $p$ and $q$, would it be rational for both teams to agree to this pact?
4. Longley and Sankaran [Longley+Sankaran:2005] report that since the rule change, the percentage of games with a winner in overtime went up 18.2%, as desired, but the percentage of overtime games also went up 3.6%.
What does that suggest about possible collusion or conservative play after the rule change?

Exercise 1 (infant-language-exercise) Consider the problem faced by an infant learning to speak and understand a language. Explain how this process fits into the general learning model. Describe the percepts and actions of the infant, and the types of learning the infant must do. Describe the subfunctions the infant is trying to learn in terms of inputs and outputs, and available example data.

Exercise 2 Repeat Exercise infant-language-exercise for the case of learning to play tennis (or some other sport with which you are familiar). Is this supervised learning or reinforcement learning?

Exercise 3 Draw a decision tree for the problem of deciding whether to move forward at a road intersection, given that the light has just turned green.

Exercise 4 We never test the same attribute twice along one path in a decision tree. Why not?

Exercise 5 Suppose we generate a training set from a decision tree and then apply decision-tree learning to that training set. Is it the case that the learning algorithm will eventually return the correct tree as the training-set size goes to infinity? Why or why not?

Exercise 6 (leaf-classification-exercise) In the recursive construction of decision trees, it sometimes happens that a mixed set of positive and negative examples remains at a leaf node, even after all the attributes have been used. Suppose that we have $p$ positive examples and $n$ negative examples.

1. Show that the solution used by DECISION-TREE-LEARNING, which picks the majority classification, minimizes the absolute error over the set of examples at the leaf.
2. Show that the class probability $p/(p+n)$ minimizes the sum of squared errors.

Exercise 7 (nonnegative-gain-exercise) Suppose that an attribute splits the set of examples $E$ into subsets $E_k$ and that each subset has $p_k$ positive examples and $n_k$ negative examples.
Show that the attribute has strictly positive information gain unless the ratio $p_k/(p_k+n_k)$ is the same for all $k$.

Exercise 8 Consider the following data set comprised of three binary input attributes ($A_1$, $A_2$, and $A_3$) and one binary output:

$$\begin{array}{|c|c|c|c|c|}
\hline
\textbf{Example} & A_1 & A_2 & A_3 & Output\space y \\
\hline
\textbf{x}_1 & 1 & 0 & 0 & 0 \\
\textbf{x}_2 & 1 & 0 & 1 & 0 \\
\textbf{x}_3 & 0 & 1 & 0 & 0 \\
\textbf{x}_4 & 1 & 1 & 1 & 1 \\
\textbf{x}_5 & 1 & 1 & 0 & 1 \\
\hline
\end{array}$$

Use the algorithm in Figure DTL-algorithm (page DTL-algorithm) to learn a decision tree for these data. Show the computations made to determine the attribute to split at each node.

Exercise 9 Construct a data set (set of examples with attributes and classifications) that would cause the decision-tree learning algorithm to find a non-minimal-sized tree. Show the tree constructed by the algorithm and the minimal-sized tree that you can generate by hand.

Exercise 10 A decision graph is a generalization of a decision tree that allows nodes (i.e., attributes used for splits) to have multiple parents, rather than just a single parent. The resulting graph must still be acyclic. Now, consider the XOR function of three binary input attributes, which produces the value 1 if and only if an odd number of the three input attributes has value 1.

1. Draw a minimal-sized decision tree for the three-input XOR function.
2. Draw a minimal-sized decision graph for the three-input XOR function.

Exercise 11 (pruning-DTL-exercise) This exercise considers $\chi^2$ pruning of decision trees (Section chi-squared-section).

1. Create a data set with two input attributes, such that the information gain at the root of the tree for both attributes is zero, but there is a decision tree of depth 2 that is consistent with all the data. What would $\chi^2$ pruning do on this data set if applied bottom up? If applied top down?
2.
Modify DECISION-TREE-LEARNING to include $\chi^2$-pruning. You might wish to consult Quinlan [Quinlan:1986] or Kearns and Mansour [Kearns+Mansour:1998] for details.

Exercise 12 (missing-value-DTL-exercise) The standard DECISION-TREE-LEARNING algorithm described in the chapter does not handle cases in which some examples have missing attribute values.

1. First, we need to find a way to classify such examples, given a decision tree that includes tests on the attributes for which values can be missing. Suppose that an example $\textbf{x}$ has a missing value for attribute $A$ and that the decision tree tests for $A$ at a node that $\textbf{x}$ reaches. One way to handle this case is to pretend that the example has all possible values for the attribute, but to weight each value according to its frequency among all of the examples that reach that node in the decision tree. The classification algorithm should follow all branches at any node for which a value is missing and should multiply the weights along each path. Write a modified classification algorithm for decision trees that has this behavior.
2. Now modify the information-gain calculation so that in any given collection of examples $C$ at a given node in the tree during the construction process, the examples with missing values for any of the remaining attributes are given “as-if” values according to the frequencies of those values in the set $C$.

Exercise 13 (gain-ratio-DTL-exercise) In Section broadening-decision-tree-section, we noted that attributes with many different possible values can cause problems with the gain measure. Such attributes tend to split the examples into numerous small classes or even singleton classes, thereby appearing to be highly relevant according to the gain measure.
The gain-ratio criterion selects attributes according to the ratio between their gain and their intrinsic information content—that is, the amount of information contained in the answer to the question, “What is the value of this attribute?” The gain-ratio criterion therefore tries to measure how efficiently an attribute provides information on the correct classification of an example. Write a mathematical expression for the information content of an attribute, and implement the gain-ratio criterion in DECISION-TREE-LEARNING.

Exercise 14 Suppose you are running a learning experiment on a new algorithm for Boolean classification. You have a data set consisting of 100 positive and 100 negative examples. You plan to use leave-one-out cross-validation and compare your algorithm to a baseline function, a simple majority classifier. (A majority classifier is given a set of training data and then always outputs the class that is in the majority in the training set, regardless of the input.) You expect the majority classifier to score about 50% on leave-one-out cross-validation, but to your surprise, it scores zero every time. Can you explain why?

Exercise 15 Suppose that a learning algorithm is trying to find a consistent hypothesis when the classifications of examples are actually random. There are $n$ Boolean attributes, and examples are drawn uniformly from the set of $2^n$ possible examples. Calculate the number of examples required before the probability of finding a contradiction in the data reaches 0.5.

Exercise 16 Construct a decision list to classify the data below. Select tests to be as small as possible (in terms of attributes), breaking ties among tests with the same number of attributes by selecting the one that classifies the greatest number of examples correctly.
If multiple tests have the same number of attributes and classify the same number of examples, then break the tie using attributes with lower index numbers (e.g., select $A_1$ over $A_2$).

$$\begin{array}{|r|r|r|r|r|r|}
\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & y \\ \hline
\textbf{x}_1 & 1 & 0 & 0 & 0 & 1 \\
\textbf{x}_2 & 1 & 0 & 1 & 1 & 1 \\
\textbf{x}_3 & 0 & 1 & 0 & 0 & 1 \\
\textbf{x}_4 & 0 & 1 & 1 & 0 & 0 \\
\textbf{x}_5 & 1 & 1 & 0 & 1 & 1 \\
\textbf{x}_6 & 0 & 1 & 0 & 1 & 0 \\
\textbf{x}_7 & 0 & 0 & 1 & 1 & 1 \\
\textbf{x}_8 & 0 & 0 & 1 & 0 & 0 \\ \hline
\end{array}$$

Exercise 17 Prove that a decision list can represent the same function as a decision tree while using at most as many rules as there are leaves in the decision tree for that function. Give an example of a function represented by a decision list using strictly fewer rules than the number of leaves in a minimal-sized decision tree for that same function.

Exercise 18 (DL-expressivity-exercise) This exercise concerns the expressiveness of decision lists (Section learning-theory-section).

1. Show that decision lists can represent any Boolean function, if the size of the tests is not limited.
2. Show that if the tests can contain at most $k$ literals each, then decision lists can represent any function that can be represented by a decision tree of depth $k$.

Exercise 19 (knn-mean-mode) Suppose a 7-nearest-neighbors regression search returns $\{7, 6, 8, 4, 7, 11, 100\}$ as the 7 nearest $y$ values for a given $x$ value. What is the value of $\hat{y}$ that minimizes the $L_1$ loss function on this data? There is a common name in statistics for this value as a function of the $y$ values; what is it?
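One way to explore this question empirically is a brute-force search for the loss-minimizing constant prediction, which avoids assuming the closed-form answer (the grid step of 0.5 is an arbitrary choice in this sketch):

```python
ys = [7, 6, 8, 4, 7, 11, 100]  # the 7 nearest y values from Exercise 19

def best_constant(ys, loss, step=0.5):
    """Brute-force search over a grid for the prediction minimizing total loss."""
    candidates = [min(ys) + i * step
                  for i in range(int((max(ys) - min(ys)) / step) + 1)]
    return min(candidates, key=lambda yhat: sum(loss(yhat, y) for y in ys))

l1 = best_constant(ys, lambda a, b: abs(a - b))       # L1 loss
l2 = best_constant(ys, lambda a, b: (a - b) ** 2)     # L2 loss
print(f"L1 minimizer: {l1}, L2 minimizer: {l2}")
```

Comparing the two minimizers against familiar summary statistics of `ys` suggests the names the exercise is after.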
Answer the same two questions for the $L_2$ loss function.

Exercise 20 (knn-mean-mode) Suppose a 7-nearest-neighbors regression search returns $\{4, 2, 8, 4, 9, 11, 100\}$ as the 7 nearest $y$ values for a given $x$ value. What is the value of $\hat{y}$ that minimizes the $L_1$ loss function on this data? There is a common name in statistics for this value as a function of the $y$ values; what is it? Answer the same two questions for the $L_2$ loss function.

Exercise 21 (svm-ellipse-exercise) Figure kernel-machine-figure showed how a circle at the origin can be linearly separated by mapping from the features $(x_1, x_2)$ to the two dimensions $(x_1^2, x_2^2)$. But what if the circle is not located at the origin? What if it is an ellipse, not a circle? The general equation for a circle (and hence the decision boundary) is $(x_1-a)^2 + (x_2-b)^2 - r^2 = 0$, and the general equation for an ellipse is $c(x_1-a)^2 + d(x_2-b)^2 - 1 = 0$.

1. Expand out the equation for the circle and show what the weights $w_i$ would be for the decision boundary in the four-dimensional feature space $(x_1, x_2, x_1^2, x_2^2)$. Explain why this means that any circle is linearly separable in this space.
2. Do the same for ellipses in the five-dimensional feature space $(x_1, x_2, x_1^2, x_2^2, x_1 x_2)$.

Exercise 22 (svm-exercise) Construct a support vector machine that computes the XOR function. Use values of +1 and –1 (instead of 1 and 0) for both inputs and outputs, so that an example looks like $([-1, 1], 1)$ or $([-1, -1], -1)$. Map the input $[x_1, x_2]$ into a space consisting of $x_1$ and $x_1 x_2$. Draw the four input points in this space, and the maximal margin separator. What is the margin?
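A quick way to tabulate the mapping used in this SVM exercise; the feature pair $(x_1, x_1 x_2)$ comes from the exercise statement, and everything else is scaffolding:

```python
# The four XOR examples in the +1/-1 encoding: ([x1, x2], label),
# where the label is +1 exactly when the inputs differ.
examples = [([-1, -1], -1), ([-1, 1], 1), ([1, -1], 1), ([1, 1], -1)]

for (x1, x2), label in examples:
    f1, f2 = x1, x1 * x2   # the feature map suggested in the exercise
    print(f"input ({x1:+d}, {x2:+d}) -> feature ({f1:+d}, {f2:+d}), class {label:+d}")
```

The printout shows that in the transformed space the class depends only on the second coordinate, which is what makes a maximal-margin separator easy to read off the drawing.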
Now draw the separating line back in the original Euclidean input space.

Exercise 23 (ensemble-error-exercise) Consider an ensemble learning algorithm that uses simple majority voting among $K$ learned hypotheses. Suppose that each hypothesis has error $\epsilon$ and that the errors made by each hypothesis are independent of the others’. Calculate a formula for the error of the ensemble algorithm in terms of $K$ and $\epsilon$, and evaluate it for the cases where $K = 5$, 10, and 20 and $\epsilon = 0.1$, 0.2, and 0.4. If the independence assumption is removed, is it possible for the ensemble error to be worse than $\epsilon$?

Exercise 24 Construct by hand a neural network that computes the XOR function of two inputs. Make sure to specify what sort of units you are using.

Exercise 25 A simple perceptron cannot represent XOR (or, generally, the parity function of its inputs). Describe what happens to the weights of a four-input, hard-threshold perceptron, beginning with all weights set to 0.1, as examples of the parity function arrive.

Exercise 26 (linear-separability-exercise) Recall from Chapter concept-learning-chapter that there are $2^{2^n}$ distinct Boolean functions of $n$ inputs.
How many of these are representable by a threshold perceptron?

Exercise 27 Consider the following set of examples, each with six inputs and one target output:

$$\begin{array}{|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|}
\hline \textbf{Example} & A_1 & A_2 & A_3 & A_4 & A_5 & A_6 & A_7 & A_8 & A_9 & A_{10} & A_{11} & A_{12} & A_{13} & A_{14} \\ \hline
\textbf{x}_1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\textbf{x}_2 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\
\textbf{x}_3 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\
\textbf{x}_4 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\
\textbf{x}_5 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\
\textbf{x}_6 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\
\textbf{T} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
\end{array}$$

1. Run the perceptron learning rule on these data and show the final weights.
2. Run the decision tree learning rule, and show the resulting decision tree.
3. Comment on your results.

Exercise 28 (perceptron-ML-gradient-exercise) Section logistic-regression-section (page logistic-regression-section) noted that the output of the logistic function could be interpreted as a probability $p$ assigned by the model to the proposition that $f(\textbf{x}) = 1$; the probability that $f(\textbf{x}) = 0$ is therefore $1-p$. Write down the probability $p$ as a function of $\textbf{x}$ and calculate the derivative of $\log p$ with respect to each weight $w_i$. Repeat the process for $\log(1-p)$.
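The derivative requested here can be sanity-checked numerically. The sketch below assumes the standard sigmoid model $p = \sigma(\textbf{w}\cdot\textbf{x})$ and compares a finite-difference estimate of $\partial \log p/\partial w_i$ with the well-known identity $(1-p)\,x_i$; the particular $\textbf{w}$ and $\textbf{x}$ are arbitrary choices for the check:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weight vector and input, just for the numeric check.
w = [0.5, -1.2, 0.3]
x = [1.0, 2.0, -0.5]

z = sum(wi * xi for wi, xi in zip(w, x))
p = sigmoid(z)
eps = 1e-6

for i in range(len(w)):
    # Perturbing w_i by eps changes z by eps * x_i.
    numeric = (math.log(sigmoid(z + eps * x[i])) - math.log(p)) / eps
    analytic = (1 - p) * x[i]
    print(f"w_{i}: numeric {numeric:.6f}  analytic {analytic:.6f}")
```

The two columns should agree to several decimal places, which is a useful check on any derivation of the gradient.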
These calculations give a learning rule for minimizing the negative-log-likelihood loss function for a probabilistic hypothesis. Comment on any resemblance to other learning rules in the chapter.

Exercise 29 (linear-nn-exercise) Suppose you had a neural network with linear activation functions. That is, for each unit the output is some constant $c$ times the weighted sum of the inputs.

1. Assume that the network has one hidden layer. For a given assignment to the weights $\textbf{w}$, write down equations for the value of the units in the output layer as a function of $\textbf{w}$ and the input layer $\textbf{x}$, without any explicit mention of the output of the hidden layer. Show that there is a network with no hidden units that computes the same function.
2. Repeat the calculation in part (a), but this time do it for a network with any number of hidden layers.
3. Suppose a network with one hidden layer and linear activation functions has $n$ input and output nodes and $h$ hidden nodes. What effect does the transformation in part (a) to a network with no hidden layers have on the total number of weights? Discuss in particular the case $h \ll n$.

Exercise 30 Implement a data structure for layered, feed-forward neural networks, remembering to provide the information needed for both forward evaluation and backward propagation. Using this data structure, write a function NEURAL-NETWORK-OUTPUT that takes an example and a network and computes the appropriate output values.

Exercise 31 Suppose that a training set contains only a single example, repeated 100 times. In 80 of the 100 cases, the single output value is 1; in the other 20, it is 0. What will a back-propagation network predict for this example, assuming that it has been trained and reaches a global optimum? (Hint: to find the global optimum, differentiate the error function and set it to zero.)

Exercise 32 The neural network whose learning performance is measured in Figure restaurant-back-prop-figure has four hidden nodes.
This number was chosen somewhat arbitrarily. Use a cross-validation method to find the best number of hidden nodes.

Exercise 33 (embedding-separability-exercise) Consider the problem of separating $N$ data points into positive and negative examples using a linear separator. Clearly, this can always be done for $N = 2$ points on a line of dimension $d = 1$, regardless of how the points are labeled or where they are located (unless the points are in the same place).

1. Show that it can always be done for $N = 3$ points on a plane of dimension $d = 2$, unless they are collinear.
2. Show that it cannot always be done for $N = 4$ points on a plane of dimension $d = 2$.
3. Show that it can always be done for $N = 4$ points in a space of dimension $d = 3$, unless they are coplanar.
4. Show that it cannot always be done for $N = 5$ points in a space of dimension $d = 3$.
5. The ambitious student may wish to prove that $N$ points in general position (but not $N+1$) are linearly separable in a space of dimension $N-1$.

Exercise 1 (dbsig-exercise) Show, by translating into conjunctive normal form and applying resolution, that the conclusion drawn on page dbsig-page concerning Brazilians is sound.

Exercise 2 For each of the following determinations, write down the logical representation and explain why the determination is true (if it is):

1. Design and denomination determine the mass of a coin.
2. For a given program, input determines output.
3. Climate, food intake, exercise, and metabolism determine weight gain and loss.
4. Baldness is determined by the baldness (or lack thereof) of one’s maternal grandfather.

Exercise 3 For each of the following determinations, write down the logical representation and explain why the determination is true (if it is):

1. Zip code determines the state (U.S.).
2. Design and denomination determine the mass of a coin.
3. Climate, food intake, exercise, and metabolism determine weight gain and loss.
4.
Baldness is determined by the baldness (or lack thereof) of one’s maternal grandfather.

Exercise 4 Would a probabilistic version of determinations be useful? Suggest a definition.

Exercise 5 (ir-step-exercise) Fill in the missing values for the clauses $C_1$ or $C_2$ (or both) in the following sets of clauses, given that $C$ is the resolvent of $C_1$ and $C_2$:

1. $C = \textit{True} \Rightarrow P(A,B)$, $C_1 = P(x,y) \Rightarrow Q(x,y)$, $C_2 = ??$.
2. $C = \textit{True} \Rightarrow P(A,B)$, $C_1 = ??$, $C_2 = ??$.
3. $C = P(x,y) \Rightarrow P(x,f(y))$, $C_1 = ??$, $C_2 = ??$.

If there is more than one possible solution, provide one example of each different kind.

Exercise 6 (prolog-ir-exercise) Suppose one writes a logic program that carries out a resolution inference step. That is, let $\textit{Resolve}(c_1,c_2,c)$ succeed if $c$ is the result of resolving $c_1$ and $c_2$. Normally, $\textit{Resolve}$ would be used as part of a theorem prover by calling it with $c_1$ and $c_2$ instantiated to particular clauses, thereby generating the resolvent $c$. Now suppose instead that we call it with $c$ instantiated and $c_1$ and $c_2$ uninstantiated. Will this succeed in generating the appropriate results of an inverse resolution step? Would you need any special modifications to the logic programming system for this to work?

Exercise 7 (foil-literals-exercise) Suppose that FOIL is considering adding a literal to a clause using a binary predicate $P$ and that previous literals (including the head of the clause) contain five different variables.

1. How many functionally different literals can be generated? Two literals are functionally identical if they differ only in the names of the *new* variables that they contain.
2. Can you find a general formula for the number of different literals with a predicate of arity $r$ when there are $n$ variables previously used?
3.
Why does FOIL not allow literals that contain no previously used variables?

Exercise 8 Using the data from the family tree in Figure family2-figure, or a subset thereof, apply the FOIL algorithm to learn a definition for the $\textit{Ancestor}$ predicate.

Exercise 1 (bayes-candy-exercise) The data used for Figure bayes-candy-figure on page bayes-candy-figure can be viewed as being generated by $h_5$. For each of the other four hypotheses, generate a data set of length 100 and plot the corresponding graphs for $P(h_i|d_1,\ldots,d_N)$ and $P(D_{N+1}=\textit{lime}|d_1,\ldots,d_N)$. Comment on your results.

Exercise 2 Repeat Exercise bayes-candy-exercise, this time plotting the values of $P(D_{N+1}=\textit{lime}|h_{MAP})$ and $P(D_{N+1}=\textit{lime}|h_{ML})$.

Exercise 3 (candy-trade-exercise) Suppose that Ann’s utilities for cherry and lime candies are $c_A$ and $\ell_A$, whereas Bob’s utilities are $c_B$ and $\ell_B$. (But once Ann has unwrapped a piece of candy, Bob won’t buy it.) Presumably, if Bob likes lime candies much more than Ann, it would be wise for Ann to sell her bag of candies once she is sufficiently sure of its lime content. On the other hand, if Ann unwraps too many candies in the process, the bag will be worth less. Discuss the problem of determining the optimal point at which to sell the bag. Determine the expected utility of the optimal procedure, given the prior distribution from Section statistical-learning-section.

Exercise 4 Two statisticians go to the doctor and are both given the same prognosis: a 40% chance that the problem is the deadly disease $A$, and a 60% chance of the fatal disease $B$. Fortunately, there are anti-$A$ and anti-$B$ drugs that are inexpensive, 100% effective, and free of side-effects. The statisticians have the choice of taking one drug, both, or neither. What will the first statistician (an avid Bayesian) do?
How about the second statistician, who always uses the maximum likelihood hypothesis?

The doctor does some research and discovers that disease $B$ actually comes in two versions, dextro-$B$ and levo-$B$, which are equally likely and equally treatable by the anti-$B$ drug. Now that there are three hypotheses, what will the two statisticians do?

Exercise 5 (BNB-exercise) Explain how to apply the boosting method of Chapter concept-learning-chapter to naive Bayes learning. Test the performance of the resulting algorithm on the restaurant learning problem.

Exercise 6 (linear-regression-exercise) Consider $N$ data points $(x_j, y_j)$, where the $y_j$s are generated from the $x_j$s according to the linear Gaussian model in Equation (linear-gaussian-likelihood-equation). Find the values of $\theta_1$, $\theta_2$, and $\sigma$ that maximize the conditional log likelihood of the data.

Exercise 7 (noisy-OR-ML-exercise) Consider the noisy-OR model for fever described in Section canonical-distribution-section. Explain how to apply maximum-likelihood learning to fit the parameters of such a model to a set of complete data. (Hint: use the chain rule for partial derivatives.)

Exercise 8 (beta-integration-exercise) This exercise investigates properties of the Beta distribution defined in Equation (beta-equation).

1. By integrating over the range $[0,1]$, show that the normalization constant for the distribution $\textit{Beta}[a,b]$ is given by $\alpha = \Gamma(a+b)/\Gamma(a)\Gamma(b)$, where $\Gamma(x)$ is the Gamma function, defined by $\Gamma(x+1) = x\cdot\Gamma(x)$ and $\Gamma(1) = 1$. (For integer $x$, $\Gamma(x+1) = x!$.)
2. Show that the mean is $a/(a+b)$.
3. Find the mode(s) (the most likely value(s) of $\theta$).
4. Describe the distribution $\textit{Beta}[\epsilon,\epsilon]$ for very small $\epsilon$. What happens as such a distribution is updated?

Exercise 9 (ML-parents-exercise) Consider an arbitrary Bayesian network, a complete data set for that network, and the likelihood for the data set according to the network.
Give a simple proof that the likelihood of the data cannot decrease if we add a new link to the network and recompute the maximum-likelihood parameter values.

Exercise 10 Consider a single Boolean random variable $Y$ (the “classification”). Let the prior probability $P(Y=true)$ be $\pi$. Let’s try to find $\pi$, given a training set $D=(y_1,\ldots,y_N)$ with $N$ independent samples of $Y$. Furthermore, suppose $p$ of the $N$ are positive and $n$ of the $N$ are negative.

1. Write down an expression for the likelihood of $D$ (i.e., the probability of seeing this particular sequence of examples, given a fixed value of $\pi$) in terms of $\pi$, $p$, and $n$.
2. By differentiating the log likelihood $L$, find the value of $\pi$ that maximizes the likelihood.
3. Now suppose we add in $k$ Boolean random variables $X_1, X_2, \ldots, X_k$ (the “attributes”) that describe each sample, and suppose we assume that the attributes are conditionally independent of each other given the goal $Y$. Draw the Bayes net corresponding to this assumption.
4. Write down the likelihood for the data including the attributes, using the following additional notation:
   - $\alpha_i$ is $P(X_i=true | Y=true)$.
   - $\beta_i$ is $P(X_i=true | Y=false)$.
   - $p_i^+$ is the count of samples for which $X_i=true$ and $Y=true$.
   - $n_i^+$ is the count of samples for which $X_i=false$ and $Y=true$.
   - $p_i^-$ is the count of samples for which $X_i=true$ and $Y=false$.
   - $n_i^-$ is the count of samples for which $X_i=false$ and $Y=false$.

   [Hint: consider first the probability of seeing a single example with specified values for $X_1, X_2, \ldots, X_k$ and $Y$.]
5. By differentiating the log likelihood $L$, find the values of $\alpha_i$ and $\beta_i$ (in terms of the various counts) that maximize the likelihood and say in words what these values represent.
6. Let $k = 2$, and consider a data set with all four possible examples of the XOR function.
Compute the maximum likelihood estimates of $\pi$, $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$.
7. Given these estimates of $\pi$, $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$, what are the posterior probabilities $P(Y=true | x_1,x_2)$ for each example?

Exercise 11 Consider the application of EM to learn the parameters for the network in Figure mixture-networks-figure(a), given the true parameters in Equation (candy-true-equation).

1. Explain why the EM algorithm would not work if there were just two attributes in the model rather than three.
2. Show the calculations for the first iteration of EM starting from Equation (candy-64-equation).
3. What happens if we start with all the parameters set to the same value $p$? (Hint: you may find it helpful to investigate this empirically before deriving the general result.)
4. Write out an expression for the log likelihood of the tabulated candy data on page candy-counts-page in terms of the parameters, calculate the partial derivatives with respect to each parameter, and investigate the nature of the fixed point reached in part (c).

Exercise 1 Implement a passive learning agent in a simple environment, such as the $4\times 3$ world. For the case of an initially unknown environment model, compare the learning performance of the direct utility estimation, TD, and ADP algorithms. Do the comparison for the optimal policy and for several random policies. For which do the utility estimates converge faster? What happens when the size of the environment is increased? (Try environments with and without obstacles.)

Exercise 2 Chapter complex-decisions-chapter defined a proper policy for an MDP as one that is guaranteed to reach a terminal state. Show that it is possible for a passive ADP agent to learn a transition model for which its policy $\pi$ is improper even if $\pi$ is proper for the true MDP; with such models, the POLICY-EVALUATION step may fail if $\gamma = 1$.
Show that this problem cannot arise if POLICY-EVALUATION is applied to the learned model only at the end of a trial.

Exercise 3 (prioritized-sweeping-exercise) Starting with the passive ADP agent, modify it to use an approximate ADP algorithm as discussed in the text. Do this in two steps:

1. Implement a priority queue for adjustments to the utility estimates. Whenever a state is adjusted, all of its predecessors also become candidates for adjustment and should be added to the queue. The queue is initialized with the state from which the most recent transition took place. Allow only a fixed number of adjustments.
2. Experiment with various heuristics for ordering the priority queue, examining their effect on learning rates and computation time.

Exercise 4 The direct utility estimation method in Section passive-rl-section uses distinguished terminal states to indicate the end of a trial. How could it be modified for environments with discounted rewards and no terminal states?

Exercise 5 Write out the parameter update equations for TD learning with
$$\hat{U}(x,y) = \theta_0 + \theta_1 x + \theta_2 y + \theta_3\sqrt{(x-x_g)^2 + (y-y_g)^2}.$$

Exercise 6 Adapt the vacuum world (Chapter agents-chapter) for reinforcement learning by including rewards for squares being clean. Make the world observable by providing suitable percepts. Now experiment with different reinforcement learning agents. Is function approximation necessary for success? What sort of approximator works for this application?

Exercise 7 (approx-LMS-exercise) Implement an exploring reinforcement learning agent that uses direct utility estimation. Make two versions—one with a tabular representation and one using the function approximator in Equation (4x3-linear-approx-equation). Compare their performance in three environments:

1. The $4\times 3$ world described in the chapter.
2. A $10\times 10$ world with no obstacles and a +1 reward at (10,10).
3.
A $10\times 10$ world with no obstacles and a +1 reward at (5,5).

Exercise 8 Devise suitable features for reinforcement learning in stochastic grid worlds (generalizations of the $4\times 3$ world) that contain multiple obstacles and multiple terminal states with rewards of $+1$ or $-1$.

Exercise 9 Extend the standard game-playing environment (Chapter game-playing-chapter) to incorporate a reward signal. Put two reinforcement learning agents into the environment (they may, of course, share the agent program) and have them play against each other. Apply the generalized TD update rule (Equation (generalized-td-equation)) to update the evaluation function. You might wish to start with a simple linear weighted evaluation function and a simple game, such as tic-tac-toe.

Exercise 10 (10x10-exercise) Compute the true utility function and the best linear approximation in $x$ and $y$ (as in Equation (4x3-linear-approx-equation)) for the following environments:

1. A $10\times 10$ world with a single $+1$ terminal state at (10,10).
2. As in (a), but add a $-1$ terminal state at (10,1).
3. As in (b), but add obstacles in 10 randomly selected squares.
4. As in (b), but place a wall stretching from (5,2) to (5,9).
5. As in (a), but with the terminal state at (5,5).

The actions are deterministic moves in the four directions. In each case, compare the results using three-dimensional plots. For each environment, propose additional features (besides $x$ and $y$) that would improve the approximation and show the results.

Exercise 11 Implement the REINFORCE and PEGASUS algorithms and apply them to the $4\times 3$ world, using a policy family of your own choosing.
Comment on the results.

Exercise 12 Investigate the application of reinforcement learning ideas to the modeling of human and animal behavior.

Exercise 13 Is reinforcement learning an appropriate abstract model for evolution? What connection exists, if any, between hardwired reward signals and evolutionary fitness?

Exercise 1 This exercise explores the quality of the $n$-gram model of language. Find or create a monolingual corpus of 100,000 words or more. Segment it into words, and compute the frequency of each word. How many distinct words are there? Also count frequencies of bigrams (two consecutive words) and trigrams (three consecutive words). Now use those frequencies to generate language: from the unigram, bigram, and trigram models, in turn, generate a 100-word text by making random choices according to the frequency counts. Compare the three generated texts with actual language. Finally, calculate the perplexity of each model.

Exercise 2 Write a program to do segmentation of words without spaces. Given a string, such as the URL “thelongestlistofthelongeststuffatthelongestdomainnameatlonglast.com,” return a list of component words: [“the,” “longest,” “list,” $\ldots$]. This task is useful for parsing URLs, for spelling correction when words run together, and for languages such as Chinese that do not have spaces between words. It can be solved with a unigram or bigram word model and a dynamic programming algorithm similar to the Viterbi algorithm.

Exercise 3 Zipf’s law of word distribution states the following: Take a large corpus of text, count the frequency of every word in the corpus, and then rank these frequencies in decreasing order. Let $f_{I}$ be the $I$th largest frequency in this list; that is, $f_{1}$ is the frequency of the most common word (usually “the”), $f_{2}$ is the frequency of the second most common word, and so on. Zipf’s law states that $f_{I}$ is approximately equal to $\alpha / I$ for some constant $\alpha$.
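A minimal sketch of the rank-frequency computation behind this law (the corpus here is a hypothetical toy; the law itself only emerges on large corpora, and the fit $\alpha = f_1$ is the crudest possible choice):

```python
from collections import Counter

# Hypothetical tiny corpus; a real verification needs tens of thousands of words.
text = """the cat sat on the mat and the dog sat on the log
the cat and the dog saw the mat"""

counts = Counter(text.split())
ranked = counts.most_common()   # [(word, frequency), ...] in decreasing order

alpha = ranked[0][1]            # crude fit: alpha = f_1, so the prediction is f_1 / I
for rank, (word, freq) in enumerate(ranked, start=1):
    print(f"{rank:2d}  {word:5s}  observed {freq}  Zipf predicts {alpha / rank:.1f}")
```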
The law tends to be highly accurate except for very small and very large values of $I$.

Exercise 4 Choose a corpus of at least 20,000 words of online text, and verify Zipf’s law experimentally. Define an error measure and find the value of $\alpha$ where Zipf’s law best matches your experimental data. Create a log–log graph plotting $f_{I}$ vs. $I$ and $\alpha/I$ vs. $I$. (On a log–log graph, the function $\alpha/I$ is a straight line.) In carrying out the experiment, be sure to eliminate any formatting tokens (e.g., HTML tags) and normalize upper and lower case.

Exercise 5 (Adapted from Jurafsky+Martin:2000.) In this exercise you will develop a classifier for authorship: given a text, the classifier predicts which of two candidate authors wrote the text. Obtain samples of text from two different authors. Separate them into training and test sets. Now train a language model on the training set. You can choose what features to use; $n$-grams of words or letters are the easiest, but you can add additional features that you think may help. Then compute the probability of the text under each language model and choose the most probable model. Assess the accuracy of this technique. How does accuracy change as you alter the set of features? This subfield of linguistics is called stylometry; its successes include the identification of the author of the disputed Federalist Papers Mosteller+Wallace:1964 and some disputed works of Shakespeare Hope:1994. Khmelev+Tweedie:2001 produce good results with a simple letter bigram model.

Exercise 6 This exercise concerns the classification of spam email. Create a corpus of spam email and one of non-spam mail. Examine each corpus and decide what features appear to be useful for classification: unigram words? bigrams? message length, sender, time of arrival?
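As a starting point for the feature question in this exercise, a unigram bag-of-words extractor might be sketched as follows (the four messages are hypothetical stand-ins for real spam/ham corpora):

```python
from collections import Counter

# Hypothetical toy corpora; in the exercise these would be real mail collections.
spam = ["win money now", "free money offer now"]
ham  = ["meeting agenda attached", "lunch at noon"]

def unigram_features(message):
    """Bag-of-words feature vector: lowercased word -> count within the message."""
    return Counter(message.lower().split())

for label, corpus in (("spam", spam), ("ham", ham)):
    for msg in corpus:
        print(label, dict(unigram_features(msg)))
```

Bigrams, message length, or header fields would simply add further entries to the same feature dictionary.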
Then train a classification algorithm (decision tree, naive Bayes, SVM, logistic regression, or some other algorithm of your choosing) on a training set and report its accuracy on a test set.

Exercise 7 Create a test set of ten queries, and pose them to three major Web search engines. Evaluate each one for precision at 1, 3, and 10 documents. Can you explain the differences between engines?

Exercise 8 Try to ascertain which of the search engines from the previous exercise are using case folding, stemming, synonyms, and spelling correction.

Exercise 9 Estimate how much storage space is necessary for the index to a 100-billion-page corpus of Web pages. Show the assumptions you made.

Exercise 10 Write a regular expression or a short program to extract company names. Test it on a corpus of business news articles. Report your recall and precision.

Exercise 11 Consider the problem of trying to evaluate the quality of an IR system that returns a ranked list of answers (like most Web search engines). The appropriate measure of quality depends on the presumed model of what the searcher is trying to achieve, and what strategy she employs. For each of the following models, propose a corresponding numeric measure.

1. The searcher will look at the first twenty answers returned, with the objective of getting as much relevant information as possible.
2. The searcher needs only one relevant document, and will go down the list until she finds the first one.
3. The searcher has a fairly narrow query and is able to examine all the answers retrieved. She wants to be sure that she has seen everything in the document collection that is relevant to her query. (E.g., a lawyer wants to be sure that she has found all relevant precedents, and is willing to spend considerable resources on that.)
4. The searcher needs just one document relevant to the query, and can afford to pay a research assistant for an hour’s work looking through the results. The assistant can look through 100 retrieved documents in an hour.
The assistant will charge the searcher for the full hour regardless of whether he finds it immediately or at the end of the hour.
5. The searcher will look through all the answers. Examining a document has cost \$A; finding a relevant document has value \$B; failing to find a relevant document has cost \$C for each relevant document not found.
6. The searcher wants to collect as many relevant documents as possible, but needs steady encouragement. She looks through the documents in order. If the documents she has looked at so far are mostly good, she will continue; otherwise, she will stop.

Exercise 1 (washing-clothes-exercise) Read the following text once for understanding, and remember as much of it as you can. There will be a test later.

> The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run this may not seem important but complications can easily arise. A mistake is expensive as well. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one can never tell. After the procedure is completed one arranges the material into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will have to be repeated. However, this is part of life.

Exercise 2 An HMM grammar is essentially a standard HMM whose state variable is $N$ (nonterminal, with values such as $Det$, $Adjective$, $Noun$, and so on) and whose evidence variable is $W$ (word, with values such as $is$, $duck$, and so on).
The HMM model includes a prior ${\textbf{P}}(N_0)$, a transition model ${\textbf{P}}(N_{t+1}|N_t)$, and a sensor model ${\textbf{P}}(W_t|N_t)$. Show that every HMM grammar can be written as a PCFG. [Hint: start by thinking about how the HMM prior can be represented by PCFG rules for the sentence symbol. You may find it helpful to illustrate for the particular HMM with values $A$, $B$ for $N$ and values $x$, $y$ for $W$.]

Exercise 3 Consider the following PCFG for simple verb phrases:

> 0.1: VP $\rightarrow$ Verb
> 0.2: VP $\rightarrow$ Copula Adjective
> 0.5: VP $\rightarrow$ Verb the Noun
> 0.2: VP $\rightarrow$ VP Adverb
> 0.5: Verb $\rightarrow$ is
> 0.5: Verb $\rightarrow$ shoots
> 0.8: Copula $\rightarrow$ is
> 0.2: Copula $\rightarrow$ seems
> 0.5: Adjective $\rightarrow$ unwell
> 0.5: Adjective $\rightarrow$ well
> 0.5: Adverb $\rightarrow$ well
> 0.5: Adverb $\rightarrow$ badly
> 0.6: Noun $\rightarrow$ duck
> 0.4: Noun $\rightarrow$ well

1. Which of the following have a nonzero probability as a VP? (i) shoots the duck well well well (ii) seems the well well (iii) shoots the unwell well badly
2. What is the probability of generating "is well well"?
3. What types of ambiguity are exhibited by the phrase in (b)?
4. Given any PCFG, is it possible to calculate the probability that the PCFG generates a string of exactly 10 words?

Exercise 4 Consider the following simple PCFG for noun phrases:

> 0.6: NP $\rightarrow$ Det AdjString Noun
> 0.4: NP $\rightarrow$ Det NounNounCompound
> 0.5: AdjString $\rightarrow$ Adj AdjString
> 0.5: AdjString $\rightarrow$ $\Lambda$
> 1.0: NounNounCompound $\rightarrow$ Noun
> 0.8: Det $\rightarrow$ the
> 0.2: Det $\rightarrow$ a
> 0.5: Adj $\rightarrow$ small
> 0.5: Adj $\rightarrow$ green
> 0.6: Noun $\rightarrow$ village
> 0.4: Noun $\rightarrow$ green

where $\Lambda$ denotes the empty string.

1. What is the longest NP that can be generated by this grammar? (i) three words (ii) four words (iii) infinitely many words
2.
Which of the following have a nonzero probability of being generated as complete NPs? (i) a small green village (ii) a green green green (iii) a small village green
3. What is the probability of generating "the green green"?
4. What types of ambiguity are exhibited by the phrase in (c)?
5. Given any PCFG and any finite word sequence, is it possible to calculate the probability that the sequence was generated by the PCFG?

Exercise 5 Outline the major differences between Java (or any other computer language with which you are familiar) and English, commenting on the "understanding" problem in each case. Think about such things as grammar, syntax, semantics, pragmatics, compositionality, context-dependence, lexical ambiguity, syntactic ambiguity, reference finding (including pronouns), background knowledge, and what it means to "understand" in the first place.

Exercise 6 This exercise concerns grammars for very simple languages.

1. Write a context-free grammar for the language $a^n b^n$.
2. Write a context-free grammar for the palindrome language: the set of all strings whose second half is the reverse of the first half.
3. Write a context-sensitive grammar for the duplicate language: the set of all strings whose second half is the same as the first half.

Exercise 7 Consider the sentence "Someone walked slowly to the supermarket" and a lexicon consisting of the following words:

$Pronoun \rightarrow \textbf{someone} \quad Verb \rightarrow \textbf{walked}$
$Adv \rightarrow \textbf{slowly} \quad Prep \rightarrow \textbf{to}$
$Article \rightarrow \textbf{the} \quad Noun \rightarrow \textbf{supermarket}$

Which of the following three grammars, combined with the lexicon, generates the given sentence?
Show the corresponding parse tree(s).

| (A) | (B) | (C) |
| --- | --- | --- |
| $S \rightarrow NP\space VP$ | $S \rightarrow NP\space VP$ | $S \rightarrow NP\space VP$ |
| $NP \rightarrow Pronoun$ | $NP \rightarrow Pronoun$ | $NP \rightarrow Pronoun$ |
| $NP \rightarrow Article\space Noun$ | $NP \rightarrow Noun$ | $NP \rightarrow Article\space NP$ |
| $VP \rightarrow VP\space PP$ | $NP \rightarrow Article\space NP$ | $VP \rightarrow Verb\space Adv$ |
| $VP \rightarrow VP\space Adv\space Adv$ | $VP \rightarrow Verb\space Vmod$ | $Adv \rightarrow Adv\space Adv$ |
| $VP \rightarrow Verb$ | $Vmod \rightarrow Adv\space Vmod$ | $Adv \rightarrow PP$ |
| $PP \rightarrow Prep\space NP$ | $Vmod \rightarrow Adv$ | $PP \rightarrow Prep\space NP$ |
| $NP \rightarrow Noun$ | $Adv \rightarrow PP$ | $NP \rightarrow Noun$ |
|  | $PP \rightarrow Prep\space NP$ |  |

For each of the preceding three grammars, write down three sentences of English and three sentences of non-English generated by the grammar. Each sentence should be significantly different, should be at least six words long, and should include some new lexical entries (which you should define). Suggest ways to improve each grammar to avoid generating the non-English sentences.

Exercise 8 Collect some examples of time expressions, such as "two o'clock," "midnight," and "12:46." Also think up some examples that are ungrammatical, such as "thirteen o'clock" or "half past two fifteen." Write a grammar for the time language.

Exercise 9 Some linguists have argued as follows: Children learning a language hear only positive examples of the language and no negative examples.
Therefore, the hypothesis that "every possible sentence is in the language" is consistent with all the observed examples. Moreover, this is the simplest consistent hypothesis. Furthermore, all grammars for languages that are supersets of the true language are also consistent with the observed data. Yet children do induce (more or less) the right grammar. It follows that they begin with very strong innate grammatical constraints that rule out all of these more general hypotheses a priori. Comment on the weak point(s) in this argument from a statistical learning viewpoint.

Exercise 10 (chomsky-form-exercise) In this exercise you will transform $\large \varepsilon_0$ into Chomsky Normal Form (CNF). There are five steps: (a) Add a new start symbol, (b) Eliminate $\epsilon$ rules, (c) Eliminate multiple words on right-hand sides, (d) Eliminate rules of the form (${\it X} \rightarrow {\it Y}$), (e) Convert long right-hand sides into binary rules.

1. The start symbol, $S$, can occur only on the left-hand side in CNF. Replace ${\it S}$ everywhere by a new symbol ${\it S'}$ and add a rule of the form ${\it S} \rightarrow {\it S'}$.
2. The empty string, $\epsilon$, cannot appear on the right-hand side in CNF. $\large \varepsilon_0$ does not have any rules with $\epsilon$, so this is not an issue.
3. A word can appear on the right-hand side in a rule only of the form (${\it X} \rightarrow$ word). Replace each rule of the form (${\it X} \rightarrow$ ...word ...) with (${\it X} \rightarrow$ ...${\it W'}$ ...) and (${\it W'} \rightarrow$ word), using a new symbol ${\it W'}$.
4. A rule (${\it X} \rightarrow {\it Y}$) is not allowed in CNF; it must be (${\it X} \rightarrow {\it Y}\space{\it Z}$) or (${\it X} \rightarrow$ word). Replace each rule of the form (${\it X} \rightarrow {\it Y}$) with a set of rules of the form (${\it X} \rightarrow$ ...), one for each rule (${\it Y} \rightarrow$ ...), where (...) indicates one or more symbols.
5.
Replace each rule of the form (${\it X} \rightarrow {\it Y}\space{\it Z}$ ...) with two rules, (${\it X} \rightarrow {\it Y}\space{\it Z'}$) and (${\it Z'} \rightarrow {\it Z}$ ...), where ${\it Z'}$ is a new symbol.

Show each step of the process and the final set of rules.

Exercise 11 Consider the following toy grammar:

> $S \rightarrow NP\space VP$
> $NP \rightarrow Noun$
> $NP \rightarrow NP\space and\space NP$
> $NP \rightarrow NP\space PP$
> $VP \rightarrow Verb$
> $VP \rightarrow VP\space and\space VP$
> $VP \rightarrow VP\space PP$
> $PP \rightarrow Prep\space NP$
> $Noun \rightarrow Sally\space; pools\space; streams\space; swims$
> $Prep \rightarrow in$
> $Verb \rightarrow pools\space; streams\space; swims$

1. Show all the parse trees in this grammar for the sentence "Sally swims in streams and pools."
2. Show all the table entries that would be made by a (non-probabilistic) CYK parser on this sentence.

Exercise 12 (exercise-subj-verb-agree) Using DCG notation, write a grammar for a language that is just like $\large \varepsilon_1$, except that it enforces agreement between the subject and verb of a sentence and thus does not generate ungrammatical sentences such as "I smells the wumpus."

Exercise 13 Consider the following PCFG:

> $S \rightarrow NP\space VP [1.0]$
> $NP \rightarrow \textit{Noun} [0.6] \space|\space \textit{Pronoun} [0.4]$
> $VP \rightarrow \textit{Verb}\space NP [0.8] \space|\space \textit{Modal}\space \textit{Verb} [0.2]$
> $\textit{Noun} \rightarrow \textbf{can} [0.1] \space|\space \textbf{fish} [0.3] \space|\space ...$
> $\textit{Pronoun} \rightarrow \textbf{I} [0.4] \space|\space ...$
> $\textit{Verb} \rightarrow \textbf{can} [0.01] \space|\space \textbf{fish} [0.1] \space|\space ...$
> $\textit{Modal} \rightarrow \textbf{can} [0.3] \space|\space ...$

The sentence "I can fish" has two parse trees with this grammar.
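Since rule probabilities multiply along a derivation, the prior probability of any parse tree under a PCFG is just the product of the probabilities of the rules it uses. The following is a minimal sketch of that computation (the nested-tuple tree encoding and the `RULES` dictionary are illustrative assumptions, not part of the exercise):

```python
# Prior probability of a parse tree under the PCFG above:
# multiply the probabilities of every rule application in the tree.
RULES = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Noun",)): 0.6,
    ("NP", ("Pronoun",)): 0.4,
    ("VP", ("Verb", "NP")): 0.8,
    ("VP", ("Modal", "Verb")): 0.2,
    ("Noun", ("can",)): 0.1,
    ("Noun", ("fish",)): 0.3,
    ("Pronoun", ("I",)): 0.4,
    ("Verb", ("can",)): 0.01,
    ("Verb", ("fish",)): 0.1,
    ("Modal", ("can",)): 0.3,
}

def tree_prior(tree):
    """tree = (label, [children]); a leaf word is (word, [])."""
    label, children = tree
    if not children:              # bare words contribute no rule probability
        return 1.0
    p = RULES[(label, tuple(c[0] for c in children))]
    for child in children:
        p *= tree_prior(child)
    return p

# Example: the NP "fish" uses NP -> Noun (0.6) and Noun -> fish (0.3).
np_fish = ("NP", [("Noun", [("fish", [])])])
print(tree_prior(np_fish))  # 0.6 * 0.3
```

Computing `tree_prior` for each of the two parses of "I can fish" and then normalizing by their sum gives the conditional probabilities the exercise asks for.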
Show the two trees, their prior probabilities, and their conditional probabilities, given the sentence.

Exercise 14 An augmented context-free grammar can represent languages that a regular context-free grammar cannot. Show an augmented context-free grammar for the language $a^n b^n c^n$. The allowable values for augmentation variables are 1 and $SUCCESSOR(n)$, where $n$ is a value. The rule for a sentence in this language is
$$S(n) \rightarrow A(n)\space B(n)\space C(n)\ .$$
Show the rule(s) for each of ${\it A}$, ${\it B}$, and ${\it C}$.

Exercise 15 Augment the $\large \varepsilon_1$ grammar so that it handles article–noun agreement. That is, make sure that "agents" and "an agent" are ${\it NP}$s, but "agent" and "an agents" are not.

Exercise 16 Consider the following sentence (from The New York Times, July 28, 2008):

> Banks struggling to recover from multibillion-dollar loans on real
> estate are curtailing loans to American businesses, depriving even
> healthy companies of money for expansion and hiring.

1. Which of the words in this sentence are lexically ambiguous?
2. Find two cases of syntactic ambiguity in this sentence (there are more than two).
3. Give an instance of metaphor in this sentence.
4. Can you find semantic ambiguity?

Exercise 17 (washing-clothes2-exercise) Without looking back at Exercise washing-clothes-exercise, answer the following questions:

1. What are the four steps that are mentioned?
2. What step is left out?
3. What is "the material" that is mentioned in the text?
4. What kind of mistake would be expensive?
5. Is it better to do too few things or too many? Why?

Exercise 18 Select five sentences and submit them to an online translation service. Translate them from English to another language and back to English. Rate the resulting sentences for grammaticality and preservation of meaning. Repeat the process; does the second round of iteration give worse results or the same results? Does the choice of intermediate language make a difference to the quality of the results?
If you know a foreign language, look at the translation of one paragraph into that language. Count and describe the errors made, and conjecture why these errors were made.

Exercise 19 The $D_i$ values for the sentence in Figure mt-alignment-figure sum to 0. Will that be true of every translation pair? Prove it or give a counterexample.

Exercise 20 (Adapted from [Knight:1999].) Our translation model assumes that, after the phrase translation model selects phrases and the distortion model permutes them, the language model can unscramble the permutation. This exercise investigates how sensible that assumption is. Try to unscramble these proposed lists of phrases into the correct order:

1. have, programming, a, seen, never, I, language, better
2. loves, john, mary
3. is the, communication, exchange of, intentional, information brought, by, about, the production, perception of, and signs, from, drawn, a, of, system, signs, conventional, shared
4. created, that, we hold these, to be, all men, truths, are, equal, self-evident

Which ones could you do? What type of knowledge did you draw upon? Train a bigram model from a training corpus, and use it to find the highest-probability permutation of some sentences from a test corpus. Report on the accuracy of this model.

Exercise 21 Calculate the most probable path through the HMM in Figure sr-hmm-figure for the output sequence $[C_1,C_2,C_3,C_4,C_4,C_6,C_7]$. Also give its probability.

Exercise 22 We forgot to mention that the text in Exercise washing-clothes-exercise is entitled "Washing Clothes." Reread the text and answer the questions in Exercise washing-clothes2-exercise. Did you do better this time? Bransford and Johnson [Bransford+Johnson:1973] used this text in a controlled experiment and found that the title helped significantly. What does this tell you about how language and memory work?

Exercise 1 In the shadow of a tree with a dense, leafy canopy, one sees a number of light spots. Surprisingly, they all appear to be circular. Why?
After all, the gaps between the leaves through which the sun shines are not likely to be circular.

Exercise 2 Consider a picture of a white sphere floating in front of a black backdrop. The image curve separating white pixels from black pixels is sometimes called the "outline" of the sphere. Show that the outline of a sphere, viewed in a perspective camera, can be an ellipse. Why do spheres not look like ellipses to you?

Exercise 3 Consider an infinitely long cylinder of radius $r$ oriented with its axis along the $y$-axis. The cylinder has a Lambertian surface and is viewed by a camera along the positive $z$-axis. What will you expect to see in the image if the cylinder is illuminated by a point source at infinity located on the positive $x$-axis? Draw the contours of constant brightness in the projected image. Are the contours of equal brightness uniformly spaced?

Exercise 4 Edges in an image can correspond to a variety of events in a scene. Consider Figure illuminationfigure (page illuminationfigure), and assume that it is a picture of a real three-dimensional scene. Identify ten different brightness edges in the image, and for each, state whether it corresponds to a discontinuity in (a) depth, (b) surface orientation, (c) reflectance, or (d) illumination.

Exercise 5 A stereoscopic system is being contemplated for terrain mapping. It will consist of two CCD cameras, each having ${512}\times{512}$ pixels on a 10 cm $\times$ 10 cm square sensor. The lenses to be used have a focal length of 16 cm, with the focus fixed at infinity. For corresponding points ($u_1,v_1$) in the left image and ($u_2,v_2$) in the right image, $v_1=v_2$ because the $x$-axes in the two image planes are parallel to the epipolar lines—the lines from the object to the camera. The optical axes of the two cameras are parallel. The baseline between the cameras is 1 meter.

1. If the nearest distance to be measured is 16 meters, what is the largest disparity that will occur (in pixels)?
2.
What is the distance resolution at 16 meters, due to the pixel spacing?
3. What distance corresponds to a disparity of one pixel?

Exercise 6 Which of the following are true, and which are false?

1. Finding corresponding points in stereo images is the easiest phase of the stereo depth-finding process.
2. Shape-from-texture can be done by projecting a grid of light-stripes onto the scene.
3. Lines with equal lengths in the scene always project to equal lengths in the image.
4. Straight lines in the image necessarily correspond to straight lines in the scene.

Exercise 7 Which of the following are true, and which are false?

1. Finding corresponding points in stereo images is the easiest phase of the stereo depth-finding process.
2. In stereo views of the same scene, greater accuracy is obtained in the depth calculations if the two camera positions are farther apart.
3. Lines with equal lengths in the scene always project to equal lengths in the image.
4. Straight lines in the image necessarily correspond to straight lines in the scene.

Top view of a two-camera vision system observing a bottle with a wall behind it.

Exercise 8 (Courtesy of Pietro Perona.) Figure bottle-figure shows two cameras at X and Y observing a scene. Draw the image seen at each camera, assuming that all named points are in the same horizontal plane. What can be concluded from these two images about the relative distances of points A, B, C, D, and E from the camera baseline, and on what basis?

Exercise 1 (mcl-biasdness-exercise) Monte Carlo localization is biased for any finite sample size—i.e., the expected value of the location computed by the algorithm differs from the true expected value—because of the way particle filtering works. In this question, you are asked to quantify this bias.

To simplify, consider a world with four possible robot locations: $X=\{x_1,x_2,x_3,x_4\}$. Initially, we draw $N\geq 1$ samples uniformly from among those locations.
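In MCL, each particle is weighted by the sensor model and the next generation is drawn in proportion to those weights. Here is a minimal simulation sketch of the single-sample resampling step this exercise describes (the function and variable names are my own; the $P(z \mid x_i)$ values are the ones the exercise specifies):

```python
import random

# Sensor model P(z | x_i) from the exercise, used as particle weights.
P_Z = {"x1": 0.8, "x2": 0.4, "x3": 0.1, "x4": 0.1}

def resample_once(n, rng):
    """Draw n uniform samples, weight by P(z|x), resample ONE particle."""
    particles = [rng.choice(list(P_Z)) for _ in range(n)]
    weights = [P_Z[p] for p in particles]  # random.choices normalizes these
    return rng.choices(particles, weights=weights, k=1)[0]

# Estimate the induced distribution over X for N = 2 by repetition.
rng = random.Random(0)
trials = 100_000
counts = {x: 0 for x in P_Z}
for _ in range(trials):
    counts[resample_once(2, rng)] += 1
print({x: round(c / trials, 3) for x, c in counts.items()})
```

For $N=2$ the estimated distribution differs noticeably from the true posterior (the normalized weights $0.8/1.4$, $0.4/1.4$, $0.1/1.4$, $0.1/1.4$), which is exactly the finite-sample bias the exercise asks you to quantify analytically.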
As usual, it is perfectly acceptable if more than one sample is generated for any of the locations in $X$. Let $Z$ be a Boolean sensor variable characterized by the following conditional probabilities:
$$\begin{aligned}
P(z \mid x_1) = 0.8 \qquad\qquad P(\lnot z \mid x_1) = 0.2 \\
P(z \mid x_2) = 0.4 \qquad\qquad P(\lnot z \mid x_2) = 0.6 \\
P(z \mid x_3) = 0.1 \qquad\qquad P(\lnot z \mid x_3) = 0.9 \\
P(z \mid x_4) = 0.1 \qquad\qquad P(\lnot z \mid x_4) = 0.9
\end{aligned}$$
MCL uses these probabilities to generate particle weights, which are subsequently normalized and used in the resampling process. For simplicity, let us assume we generate only one new sample in the resampling process, regardless of $N$. This sample might correspond to any of the four locations in $X$. Thus, the sampling process defines a probability distribution over $X$.

1. What is the resulting probability distribution over $X$ for this new sample? Answer this question separately for $N=1,\ldots,10$, and for $N=\infty$.
2. The difference between two probability distributions $P$ and $Q$ can be measured by the KL divergence, which is defined as $${KL}(P,Q) = \sum_i P(x_i)\log\frac{P(x_i)}{Q(x_i)}\ .$$ What are the KL divergences between the distributions in (a) and the true posterior?
3. What modification of the problem formulation (not the algorithm!) would guarantee that the specific estimator above is unbiased even for finite values of $N$? Provide at least two such modifications (each of which should be sufficient).

Exercise 2 (mcl-implement-exercise) Implement Monte Carlo localization for a simulated robot with range sensors. A grid map and range data are available from the code repository at aima.cs.berkeley.edu. You should demonstrate successful global localization of the robot.

A robot manipulator in two of its possible configurations.

Exercise 3 (AB-manipulator-ex) Consider a robot with two simple manipulators, as shown in figure figRobot2. Manipulator A is a square block of side 2 which can slide back and forth on a rod that runs along the x-axis from x=$-$10 to x=10.
Manipulator B is a square block of side 2 which can slide back and forth on a rod that runs along the y-axis from y=$-$10 to y=10. The rods lie outside the plane of manipulation, so the rods do not interfere with the movement of the blocks. A configuration is then a pair ${\langle}x,y{\rangle}$ where $x$ is the x-coordinate of the center of manipulator A and where $y$ is the y-coordinate of the center of manipulator B. Draw the configuration space for this robot, indicating the permitted and excluded zones.

Exercise 4 Suppose that you are working with the robot in Exercise AB-manipulator-ex and you are given the problem of finding a path from the starting configuration of figure figRobot2 to the ending configuration. Consider a potential function $$D(A, {Goal})^2 + D(B, {Goal})^2 + \frac{1}{D(A, B)^2}$$ where $D(A,B)$ is the distance between the closest points of A and B.

1. Show that hill climbing in this potential field will get stuck in a local minimum.
2. Describe a potential field where hill climbing will solve this particular problem. You need not work out the exact numerical coefficients needed, just the general form of the solution. (Hint: Add a term that "rewards" the hill climber for moving A out of B's way, even in a case like this where this does not reduce the distance from A to B in the above sense.)

Exercise 5 (inverse-kinematics-exercise) Consider the robot arm shown in Figure FigArm1. Assume that the robot's base element is 60cm long and that its upper arm and forearm are each 40cm long. As argued on page inverse-kinematics-not-unique, the inverse kinematics of a robot is often not unique. State an explicit closed-form solution of the inverse kinematics for this arm. Under what exact conditions is the solution unique?

Exercise 6 (inverse-kinematics-exercise) Consider the robot arm shown in Figure FigArm1. Assume that the robot's base element is 70cm long and that its upper arm and forearm are each 50cm long.
As argued on page inverse-kinematics-not-unique, the inverse kinematics of a robot is often not unique. State an explicit closed-form solution of the inverse kinematics for this arm. Under what exact conditions is the solution unique?

Exercise 7 (voronoi-exercise) Implement an algorithm for calculating the Voronoi diagram of an arbitrary 2D environment, described by an $n\times n$ Boolean array. Illustrate your algorithm by plotting the Voronoi diagram for 10 interesting maps. What is the complexity of your algorithm?

Exercise 8 (confspace-exercise) This exercise explores the relationship between workspace and configuration space using the examples shown in Figure FigEx2.

1. Consider the robot configurations shown in Figure FigEx2(a) through (c), ignoring the obstacle shown in each of the diagrams. Draw the corresponding arm configurations in configuration space. (Hint: Each arm configuration maps to a single point in configuration space, as illustrated in Figure FigArm1(b).)
2. Draw the configuration space for each of the workspace diagrams in Figure FigEx2(a)–(c). (Hint: The configuration spaces share with the one shown in Figure FigEx2(a) the region that corresponds to self-collision, but differences arise from the lack of enclosing obstacles and the different locations of the obstacles in these individual figures.)
3. For each of the black dots in Figure FigEx2(e)–(f), draw the corresponding configurations of the robot arm in workspace. Please ignore the shaded regions in this exercise.
4. The configuration spaces shown in Figure FigEx2(e)–(f) have all been generated by a single workspace obstacle (dark shading), plus the constraints arising from the self-collision constraint (light shading). Draw, for each diagram, the workspace obstacle that corresponds to the darkly shaded area.
5. Figure FigEx2(d) illustrates that a single planar obstacle can decompose the workspace into two disconnected regions.
What is the maximum number of disconnected regions that can be created by inserting a planar obstacle into an obstacle-free, connected workspace, for a 2DOF robot? Give an example, and argue why no larger number of disconnected regions can be created. How about a non-planar obstacle?

(a) (b) (c) (d) (e) (f)

Exercise 9 Consider a mobile robot moving on a horizontal surface. Suppose that the robot can execute two kinds of motions:

- Rolling forward a specified distance.
- Rotating in place through a specified angle.

The state of such a robot can be characterized in terms of three parameters ${\langle}x,y,\phi{\rangle}$: the x-coordinate and y-coordinate of the robot (more precisely, of its center of rotation) and the robot's orientation expressed as the angle from the positive x direction. The action "$Roll(D)$" has the effect of changing state ${\langle}x,y,\phi{\rangle}$ to ${\langle}x+D\cos(\phi), y+D\sin(\phi), \phi{\rangle}$, and the action $Rotate(\theta)$ has the effect of changing state ${\langle}x,y,\phi{\rangle}$ to ${\langle}x,y,\phi+\theta{\rangle}$.

1. Suppose that the robot is initially at ${\langle}0,0,0{\rangle}$ and then executes the actions $Rotate(60^{\circ})$, $Roll(1)$, $Rotate(25^{\circ})$, $Roll(2)$. What is the final state of the robot?
2. Now suppose that the robot has imperfect control of its own rotation, and that, if it attempts to rotate by $\theta$, it may actually rotate by any angle between $\theta-10^{\circ}$ and $\theta+10^{\circ}$. In that case, if the robot attempts to carry out the sequence of actions in (A), there is a range of possible ending states. What are the minimal and maximal values of the x-coordinate, the y-coordinate and the orientation in the final state?
3. Let us modify the model in (B) to a probabilistic model in which, when the robot attempts to rotate by $\theta$, its actual angle of rotation follows a Gaussian distribution with mean $\theta$ and standard deviation $10^{\circ}$. Suppose that the robot executes the actions $Rotate(90^{\circ})$, $Roll(1)$.
Give a simple argument that (a) the expected value of the location at the end is not equal to the result of rotating exactly $90^{\circ}$ and then rolling forward 1 unit, and (b) that the distribution of locations at the end does not follow a Gaussian. (Do not attempt to calculate the true mean or the true distribution.)

The point of this exercise is that rotational uncertainty quickly gives rise to a lot of positional uncertainty and that dealing with rotational uncertainty is painful, whether uncertainty is treated in terms of hard intervals or probabilistically, due to the fact that the relation between orientation and position is both non-linear and non-monotonic.

Simplified robot in a maze. See Exercise robot-exploration-exercise.

Exercise 10 (robot-exploration-exercise) Consider the simplified robot shown in Figure FigEx3. Suppose the robot's Cartesian coordinates are known at all times, as are those of its goal location. However, the locations of the obstacles are unknown. The robot can sense obstacles in its immediate proximity, as illustrated in this figure. For simplicity, let us assume the robot's motion is noise-free, and the state space is discrete. Figure FigEx3 is only one example; in this exercise you are required to address all possible grid worlds with a valid path from the start to the goal location.

1. Design a deliberate controller that guarantees that the robot always reaches its goal location if at all possible. The deliberate controller can memorize measurements in the form of a map that is being acquired as the robot moves. Between individual moves, it may spend arbitrary time deliberating.
2. Now design a reactive controller for the same task. This controller may not memorize past sensor measurements. (It may not build a map!) Instead, it has to make all decisions based on the current measurement, which includes knowledge of its own location and that of the goal.
The time to make a decision must be independent of the environment size or the number of past time steps. What is the maximum number of steps that it may take for your robot to arrive at the goal?
3. How will your controllers from (a) and (b) perform if any of the following six conditions apply: continuous state space, noise in perception, noise in motion, noise in both perception and motion, unknown location of the goal (the goal can be detected only when within sensor range), or moving obstacles? For each condition and each controller, give an example of a situation where the robot fails (or explain why it cannot fail).

Exercise 11 (subsumption-exercise) In Figure Fig5(b) on page Fig5, we encountered an augmented finite state machine for the control of a single leg of a hexapod robot. In this exercise, the aim is to design an AFSM that, when combined with six copies of the individual leg controllers, results in efficient, stable locomotion. For this purpose, you have to augment the individual leg controller to pass messages to your new AFSM and to wait until other messages arrive. Argue why your controller is efficient, in that it does not unnecessarily waste energy (e.g., by sliding legs), and in that it propels the robot at reasonably high speeds. Prove that your controller satisfies the dynamic stability condition given on page polygon-stability-condition-page.

Exercise 12 (human-robot-exercise) (This exercise was first devised by Michael Genesereth and Nils Nilsson. It works for first graders through graduate students.) Humans are so adept at basic household tasks that they often forget how complex these tasks are. In this exercise you will discover the complexity and recapitulate the last 30 years of developments in robotics. Consider the task of building an arch out of three blocks. Simulate a robot with four humans as follows:

Brain. The Brain directs the Hands in the execution of a plan to achieve the goal.
The Brain receives input from the Eyes, but cannot see the scene directly. The Brain is the only one who knows what the goal is.

Eyes. The Eyes report a brief description of the scene to the Brain: "There is a red box standing on top of a green box, which is on its side." Eyes can also answer questions from the Brain such as, "Is there a gap between the Left Hand and the red box?" If you have a video camera, point it at the scene and allow the Eyes to look at the viewfinder of the video camera, but not directly at the scene.

Left hand and right hand. One person plays each Hand. The two Hands stand next to each other, each wearing an oven mitt on one hand. Hands execute only simple commands from the Brain—for example, "Left Hand, move two inches forward." They cannot execute commands other than motions; for example, they cannot be commanded to "Pick up the box." The Hands must be blindfolded. The only sensory capability they have is the ability to tell when their path is blocked by an immovable obstacle such as a table or the other Hand. In such cases, they can beep to inform the Brain of the difficulty.

Exercise 1 Go through Turing's list of alleged "disabilities" of machines, identifying which have been achieved, which are achievable in principle by a program, and which are still problematic because they require conscious mental states.

Exercise 2 Find and analyze an account in the popular media of one or more of the arguments to the effect that AI is impossible.

Exercise 3 Attempt to write definitions of the terms "intelligence," "thinking," and "consciousness." Suggest some possible objections to your definitions.

Exercise 4 Does a refutation of the Chinese room argument necessarily prove that appropriately programmed computers have mental states? Does an acceptance of the argument necessarily mean that computers cannot have mental states?
Exercise 5 (brain-prosthesis-exercise) In the brain replacement argument, it is important to be able to restore the subject's brain to normal, such that its external behavior is as it would have been if the operation had not taken place. Can the skeptic reasonably object that this would require updating those neurophysiological properties of the neurons relating to conscious experience, as distinct from those involved in the functional behavior of the neurons?

Exercise 6 Suppose that a Prolog program containing many clauses about the rules of British citizenship is compiled and run on an ordinary computer. Analyze the "brain states" of the computer under wide and narrow content.

Exercise 7 Alan Perlis [Perlis:1982] wrote, "A year spent in artificial intelligence is enough to make one believe in God." He also wrote, in a letter to Philip Davis, that one of the central dreams of computer science is that "through the performance of computers and their programs we will remove all doubt that there is only a chemical distinction between the living and nonliving world." To what extent does the progress made so far in artificial intelligence shed light on these issues? Suppose that at some future date, the AI endeavor has been completely successful; that is, we have built intelligent agents capable of carrying out any human cognitive task at human levels of ability. To what extent would that shed light on these issues?

Exercise 8 Compare the social impact of artificial intelligence in the last fifty years with the social impact of the introduction of electric appliances and the internal combustion engine in the fifty years between 1890 and 1940.

Exercise 9 I. J. Good claims that intelligence is the most important quality, and that building ultraintelligent machines will change everything.
A sentient cheetah counters that “Actually speed is more important; if we could build ultrafast machines, that would change everything,” and a sentient elephant claims “You’re both wrong; what we need is ultrastrong machines.” What do you think of these arguments? Exercise 10 Analyze the potential threats from AI technology to society. What threats are most serious, and how might they be combated? How do they compare to the potential benefits? Exercise 11 How do the potential threats from AI technology compare with those from other computer science technologies, and to bio-, nano-, and nuclear technologies? Exercise 12 Some critics object that AI is impossible, while others object that it is too possible and that ultraintelligent machines pose a threat. Which of these objections do you think is more likely? Would it be a contradiction for someone to hold both positions? ", "url": " /question_bank/" }
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md
index 74f6e6588d..705a1456e1 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_1/question.md
@@ -1,3 +1,3 @@
-Show from first principles that $P(a{{\,|\,}}b\land a) = 1$.
+Show from first principles that $P(a $|$ b\land a) = 1$.
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md
index 13e3c5212d..c9058f9036 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_20/question.md
@@ -8,7 +8,7 @@
 general versions of the product rule and Bayes’ rule, with respect to
 some background evidence $\textbf{e}$:
 1. Prove the conditionalized version of the general product rule:
-    $${\textbf{P}}(X,Y {{\,|\,}}\textbf{e}) = {\textbf{P}}(X{{\,|\,}}Y,\textbf{e}) {\textbf{P}}(Y{{\,|\,}}\textbf{e})\ .$$
+    $${\textbf{P}}(X,Y $|$\textbf{e}) = {\textbf{P}}(X$|$Y,\textbf{e}) {\textbf{P}}(Y$|$\textbf{e})\ .$$

 2. Prove the conditionalized version of Bayes’ rule in
    Equation (conditional-bayes-equation).
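(Editorial aside: both parts of the exercise above follow directly from the definition of conditional probability. A sketch of part 1, written with `\mid` rather than the repository's pipe macro, might read:)

```latex
% Conditionalized product rule: condition every term on e and apply the
% definition P(A \mid B) = P(A, B) / P(B) twice.
\begin{align*}
\textbf{P}(X, Y \mid \textbf{e})
  &= \frac{\textbf{P}(X, Y, \textbf{e})}{P(\textbf{e})}
   = \frac{\textbf{P}(X \mid Y, \textbf{e})\,\textbf{P}(Y, \textbf{e})}{P(\textbf{e})}
   = \textbf{P}(X \mid Y, \textbf{e})\,\textbf{P}(Y \mid \textbf{e})\ .
\end{align*}
```

Writing the same product rule with $X$ and $Y$ swapped and dividing by $\textbf{P}(X\mid\textbf{e})$ then yields the conditionalized Bayes' rule asked for in part 2.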
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md
index e6f864fe9e..5bf338810c 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_23/question.md
@@ -2,8 +2,8 @@
 In this exercise, you will complete the normalization calculation for
 the meningitis example. First, make up a
-suitable value for $P(s{{\,|\,}}\lnot m)$, and use it to calculate
-unnormalized values for $P(m{{\,|\,}}s)$ and $P(\lnot m {{\,|\,}}s)$
+suitable value for $P(s$|$\lnot m)$, and use it to calculate
+unnormalized values for $P(m$|$s)$ and $P(\lnot m $|$s)$
 (i.e., ignoring the $P(s)$ term in the Bayes’ rule expression,
 Equation (meningitis-bayes-equation). Now normalize these values so
 that they add to 1.
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md
index 88c7891cda..a2dcd974a1 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_24/question.md
@@ -4,22 +4,22 @@
 This exercise investigates the way in which conditional independence
 relationships affect the amount of information needed for
 probabilistic calculations.
-1. Suppose we wish to calculate $P(h{{\,|\,}}e_1,e_2)$ and we have no
+1. Suppose we wish to calculate $P(h$|$e_1,e_2)$ and we have no
    conditional independence information. Which of the following sets
    of numbers are sufficient for the calculation?

    1. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$,
-       ${\textbf{P}}(E_1{{\,|\,}}H)$,
-       ${\textbf{P}}(E_2{{\,|\,}}H)$
+       ${\textbf{P}}(E_1$|$H)$,
+       ${\textbf{P}}(E_2$|$H)$

    2. ${\textbf{P}}(E_1,E_2)$, ${\textbf{P}}(H)$,
-       ${\textbf{P}}(E_1,E_2{{\,|\,}}H)$
+       ${\textbf{P}}(E_1,E_2$|$H)$

    3. ${\textbf{P}}(H)$,
-       ${\textbf{P}}(E_1{{\,|\,}}H)$,
-       ${\textbf{P}}(E_2{{\,|\,}}H)$
+       ${\textbf{P}}(E_1$|$H)$,
+       ${\textbf{P}}(E_2$|$H)$

 2. Suppose we know that
-   ${\textbf{P}}(E_1{{\,|\,}}H,E_2)={\textbf{P}}(E_1{{\,|\,}}H)$
+   ${\textbf{P}}(E_1$|$H,E_2)={\textbf{P}}(E_1$|$H)$
    for all values of $H$, $E_1$, $E_2$. Now which of the three sets
    are sufficient?
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md
index 2b535600eb..464028cde9 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_27/question.md
@@ -1,6 +1,6 @@
 Write out a general algorithm for answering queries of the form
-${\textbf{P}}({Cause}{{\,|\,}}\textbf{e})$, using a naive Bayes
+${\textbf{P}}({Cause}$|$\textbf{e})$, using a naive Bayes
 distribution. Assume that the evidence $\textbf{e}$ may assign values
 to any subset of the effect variables.
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md
index 17ebccb595..2dbfb8c7c0 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_3/question.md
@@ -3,10 +3,10 @@
 For each of the following statements, either prove it is true or give
 a counterexample.
-1. If $P(a {{\,|\,}}b, c) = P(b {{\,|\,}}a, c)$, then
-   $P(a {{\,|\,}}c) = P(b {{\,|\,}}c)$
+1. If $P(a $|$b, c) = P(b $|$a, c)$, then
+   $P(a $|$c) = P(b $|$c)$

-2. If $P(a {{\,|\,}}b, c) = P(a)$, then $P(b {{\,|\,}}c) = P(b)$
+2. If $P(a $|$b, c) = P(a)$, then $P(b $|$c) = P(b)$

-3. If $P(a {{\,|\,}}b) = P(a)$, then
-   $P(a {{\,|\,}}b, c) = P(a {{\,|\,}}c)$
+3. If $P(a $|$b) = P(a)$, then
+   $P(a $|$b, c) = P(a $|$c)$
diff --git a/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md b/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md
index 13d3199cb7..d82500041c 100644
--- a/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md
+++ b/markdown/13-Quantifying-Uncertainity/exercises/ex_8/question.md
@@ -7,6 +7,6 @@ Figure LG-network-page
 .
 1. In a two-variable network, let $X_1$ be the parent of $X_2$, let
    $X_1$ have a Gaussian prior, and let
-   ${\textbf{P}}(X_2{{\,|\,}}X_1)$ be a linear
+   ${\textbf{P}}(X_2$|$X_1)$ be a linear
    Gaussian distribution. Show that the joint distribution
    $P(X_1,X_2)$ is a multivariate Gaussian, and calculate its
    covariance matrix.
diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md
index 2358318aa4..a4cfe127d6 100644
--- a/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md
+++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_14/question.md
@@ -14,7 +14,7 @@ Figure telescop
 2. Which is the best network? Explain.
 3. Write out a conditional distribution for
-   ${\textbf{P}}(M_1{{\,|\,}}N)$, for the case where
+   ${\textbf{P}}(M_1$|$N)$, for the case where
    $N{{\,\in\\,}}\{1,2,3\}$ and $M_1{{\,\in\\,}}\{0,1,2,3,4\}$. Each
    entry in the conditional distribution should be expressed as a
    function of the parameters $e$ and/or $f$.
diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md
index af42f6e383..606670a0f6 100644
--- a/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md
+++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_15/question.md
@@ -7,7 +7,7 @@ $M_1,M_2{{\,\in\\,}}\{0,1,2,3,4\}$, with the symbolic CPTs as
 described in Exercise 
 telescope-exercise. Using the enumeration algorithm
 (Figure enumeration-algorithm on page enumeration-algorithm),
 calculate the probability distribution
-${\textbf{P}}(N{{\,|\,}}M_1{{\,=\,}}2,M_2{{\,=\,}}2)$.
+${\textbf{P}}(N$|$M_1{{\,=\,}}2,M_2{{\,=\,}}2)$.
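(Editorial aside: the enumeration query in the hunk above can be checked mechanically. The sketch below runs enumeration on a stand-in version of the telescope model; the prior and the CPT numbers are invented placeholders, not the book's symbolic $e$/$f$ tables, and serve only to show the posterior-proportional-to-prior-times-likelihood arithmetic.)

```python
# Enumeration sketch for P(N | M1=2, M2=2) on a stand-in telescope model.
# The prior and CPT values below are invented placeholders, not the
# book's symbolic e/f parameterization.
P_N = {1: 1/3, 2: 1/3, 3: 1/3}  # prior over the true number of stars

# P(M | N): each row sums to 1; a measurement may be off by one.
CPT = {
    1: {0: 0.1, 1: 0.8, 2: 0.1},
    2: {1: 0.1, 2: 0.8, 3: 0.1},
    3: {2: 0.1, 3: 0.8, 4: 0.1},
}

def posterior_N(m1, m2):
    """P(N | M1=m1, M2=m2) by enumerating N and normalizing."""
    unnorm = {n: P_N[n] * CPT[n].get(m1, 0.0) * CPT[n].get(m2, 0.0)
              for n in P_N}
    z = sum(unnorm.values())
    return {n: p / z for n, p in unnorm.items()}

print(posterior_N(2, 2))
```

With both telescopes reading 2, the posterior concentrates on $N=2$ (about 0.97 under these placeholder numbers), with the remaining mass split evenly between $N=1$ and $N=3$.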
diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md
index a4f998aebe..c30e980351 100644
--- a/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md
+++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_18/question.md
@@ -5,7 +5,7 @@ Figure exact-inference-section
 applies variable elimination to the query
-    $${\textbf{P}}({Burglary}{{\,|\,}}{JohnCalls}{{\,=\,}}{true},{MaryCalls}{{\,=\,}}{true})\ .$$
+    $${\textbf{P}}({Burglary}$|${JohnCalls}{{\,=\,}}{true},{MaryCalls}{{\,=\,}}{true})\ .$$
 Perform the calculations indicated and check that the answer is
 correct.
@@ -16,7 +16,7 @@ Figure rain-clustering-figure(a)
 (page rain-clustering-figure) and how Gibbs sampling can answer it.
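(Editorial aside: the Gibbs-sampling question referenced above can be illustrated in a few lines. The sketch below samples Cloudy and Rain given Sprinkler=true and WetGrass=true in a four-node sprinkler network; the CPT numbers are placeholders in the spirit of the figure, not necessarily its exact values.)

```python
import random

# Placeholder CPTs for a Cloudy -> {Sprinkler, Rain} -> WetGrass network;
# the numbers are illustrative, not necessarily the figure's values.
P_C = 0.5
P_S = {True: 0.10, False: 0.50}                   # P(Sprinkler | Cloudy)
P_R = {True: 0.80, False: 0.20}                   # P(Rain | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,   # P(WetGrass | Sprinkler, Rain)
       (False, True): 0.90, (False, False): 0.01}

def bern(p, v):
    """P(var = v) for a Boolean variable with P(true) = p."""
    return p if v else 1.0 - p

def gibbs_rain(n, s=True, w=True, seed=0):
    """Estimate P(Rain | Sprinkler=s, WetGrass=w) by Gibbs sampling C and R."""
    rng = random.Random(seed)
    c, r = True, True            # arbitrary initial state
    hits = 0
    for _ in range(n):
        # Resample Cloudy from P(C | r, s), proportional to P(C) P(s|C) P(r|C)
        pt = P_C * bern(P_S[True], s) * bern(P_R[True], r)
        pf = (1 - P_C) * bern(P_S[False], s) * bern(P_R[False], r)
        c = rng.random() < pt / (pt + pf)
        # Resample Rain from P(R | c, s, w), proportional to P(R|c) P(w|s,R)
        pt = P_R[c] * bern(P_W[(s, True)], w)
        pf = (1 - P_R[c]) * bern(P_W[(s, False)], w)
        r = rng.random() < pt / (pt + pf)
        hits += r
    return hits / n

print(gibbs_rain(20000))
```

With these placeholder numbers, exact enumeration gives $P({Rain}\mid s,w) = 0.0891/0.2781 \approx 0.32$, and the sample estimate settles near that value.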
diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md
index d06e0c65ef..6d56248ea6 100644
--- a/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md
+++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_23/question.md
@@ -3,11 +3,11 @@
 The Metropolis--Hastings algorithm is a member of the MCMC family; as
 such, it is designed to generate samples $\textbf{x}$ (eventually)
 according to target probabilities $\pi(\textbf{x})$. (Typically we are
 interested in sampling from
-$\pi(\textbf{x}){{\,=\,}}P(\textbf{x}{{\,|\,}}\textbf{e})$.) Like simulated annealing,
+$\pi(\textbf{x}){{\,=\,}}P(\textbf{x}$|$\textbf{e})$.) Like simulated annealing,
 Metropolis–Hastings operates in two stages. First, it samples a new
-state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}{{\,|\,}}\textbf{x})$, given the current state $\textbf{x}$.
+state $\textbf{x'}$ from a proposal distribution $q(\textbf{x'}$|$\textbf{x})$, given the current state $\textbf{x}$.
 Then, it probabilistically accepts or rejects $\textbf{x'}$ according
 to the acceptance probability
-$$\alpha(\textbf{x'}{{\,|\,}}\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}{{\,|\,}}\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}{{\,|\,}}\textbf{x})} \right)\ .$$
+$$\alpha(\textbf{x'}$|$\textbf{x}) = \min\ \left(1,\frac{\pi(\textbf{x'})q(\textbf{x}$|$\textbf{x'})}{\pi(\textbf{x})q(\textbf{x'}$|$\textbf{x})} \right)\ .$$
 If the proposal is rejected, the state remains at $\textbf{x}$.

 1. Consider an ordinary Gibbs sampling step for a specific variable
diff --git a/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md b/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md
index dafb23c321..9342825740 100644
--- a/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md
+++ b/markdown/14-Probabilistic-Reasoning/exercises/ex_3/question.md
@@ -3,28 +3,28 @@
 Equation (parameter-joint-repn-equation on
 page parameter-joint-repn-equation
 defines the joint distribution represented by a Bayesian network in
 terms of the parameters
-$\theta(X_i{{\,|\,}}{Parents}(X_i))$. This exercise asks you to derive
+$\theta(X_i$|${Parents}(X_i))$. This exercise asks you to derive
 the equivalence between the parameters and the conditional
 probabilities
-${\textbf{ P}}(X_i{{\,|\,}}{Parents}(X_i))$ from this definition.
+${\textbf{ P}}(X_i$|${Parents}(X_i))$ from this definition.
 1. Consider a simple network $X\rightarrow Y\rightarrow Z$ with three
    Boolean variables. Use
    Equations (conditional-probability-equation and
    (marginalization-equation (pages conditional-probability-equation
    and marginalization-equation)
-    to express the conditional probability $P(z{{\,|\,}}y)$ as the ratio of two sums, each over entries in the
+    to express the conditional probability $P(z$|$y)$ as the ratio of two sums, each over entries in the
    joint distribution ${\textbf{P}}(X,Y,Z)$.

 2. Now use Equation (parameter-joint-repn-equation to write this
    expression in terms of the network parameters
-    $\theta(X)$, $\theta(Y{{\,|\,}}X)$, and $\theta(Z{{\,|\,}}Y)$.
+    $\theta(X)$, $\theta(Y$|$X)$, and $\theta(Z$|$Y)$.

 3. Next, expand out the summations in your expression from part (b),
    writing out explicitly the terms for the true and false values of
    each summed variable. Assuming that all network parameters satisfy
    the constraint
-    $\sum_{x_i} \theta(x_i{{\,|\,}}{parents}(X_i)){{\,=\,}}1$, show
-    that the resulting expression reduces to $\theta(z{{\,|\,}}y)$.
+    $\sum_{x_i} \theta(x_i$|${parents}(X_i)){{\,=\,}}1$, show
+    that the resulting expression reduces to $\theta(z$|$y)$.

 4. Generalize this derivation to show that
-    $\theta(X_i{{\,|\,}}{Parents}(X_i)) = {\textbf{P}}(X_i{{\,|\,}}{Parents}(X_i))$
+    $\theta(X_i$|${Parents}(X_i)) = {\textbf{P}}(X_i$|${Parents}(X_i))$
    for any Bayesian network.
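(Editorial aside: the identity in part 4 above is easy to sanity-check numerically on the three-node chain. The parameter values below are arbitrary made-up numbers; any values in $[0,1]$ would give the same conclusion.)

```python
# Numeric check that theta(z|y) = P(z|y) on the chain X -> Y -> Z.
# The parameter values are arbitrary placeholders.
theta_x = 0.6                      # theta(X = true)
theta_y = {True: 0.7, False: 0.2}  # theta(Y = true | x)
theta_z = {True: 0.9, False: 0.4}  # theta(Z = true | y)

def joint(x, y, z):
    """Joint probability as the product of the network parameters."""
    return ((theta_x if x else 1 - theta_x)
            * (theta_y[x] if y else 1 - theta_y[x])
            * (theta_z[y] if z else 1 - theta_z[y]))

# P(z | y) as a ratio of two sums over the joint, as in parts 1-3
num = sum(joint(x, True, True) for x in (True, False))
den = sum(joint(x, True, z) for x in (True, False) for z in (True, False))
print(num / den)  # matches theta_z[True] = 0.9
```

The ratio equals $\theta(z\mid y)$ exactly because the $\theta(z\mid y)$ factor pulls out of the numerator sum and the denominator's inner sum over $z$ collapses to 1 by the normalization constraint.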