Reflections 2021
2016 / 2018 / 2019 / 2020 / 2021 / 2022 / 2023 / 2024
- Day 1
- Day 2
- Day 3 (benchmark only)
- Day 4 (benchmark only)
- Day 5 (benchmark only)
- Day 6 (benchmark only)
- Day 7 (benchmark only)
- Day 8 (benchmark only)
- Day 9 (benchmark only)
- Day 10 (benchmark only)
- Day 11 (benchmark only)
- Day 12 (benchmark only)
- Day 13 (benchmark only)
- Day 14 (benchmark only)
- Day 15 (benchmark only)
- Day 16 (benchmark only)
- Day 17 (benchmark only)
Day 1

Top / Prompt / Code / Standalone
As a simple data processing thing, this one shines pretty well in Haskell :)

Assuming we have a list, we can get the consecutive items with a combination of
`zipWith` and `drop`. Then we can just count how many pairs of items match the
predicate (strictly increasing):

```haskell
countIncreasesPart1 :: [Int] -> Int
countIncreasesPart1 xs = length (filter (== True) (zipWith (<) xs (drop 1 xs)))
```
Yes, `filter (== True)` is the same as `filter id`, but it's a bit easier to
read this way :)

Remember that if `xs` is `[2,4,6,5]`, then `drop 1 xs` is `[4,6,5]`, and so
`zip xs (drop 1 xs)` is `[(2,4), (4,6), (6,5)]`. So `zipWith (<) xs (drop 1 xs)`
is `[True, True, False]`. So counting all of the `True` items yields the
right answer!
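As a quick sanity check, here is the whole thing as a standalone sketch, run on the sample depths from the puzzle description (which should yield 7):

```haskell
-- Standalone sketch of the part 1 logic.
countIncreasesPart1 :: [Int] -> Int
countIncreasesPart1 xs = length (filter (== True) (zipWith (<) xs (drop 1 xs)))

-- sample depths from the puzzle description
sample :: [Int]
sample = [199, 200, 208, 210, 200, 207, 240, 269, 260, 263]

main :: IO ()
main = print (countIncreasesPart1 sample)  -- prints 7
```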
Part 2 is very similar, but we need to check if items three positions apart
are increasing. That's because for each window, the sum of the window is
increasing if the new item gained is bigger than the item that was just lost.
So for an example like `[3,5,6,4,7,8]`, as we move from `[3,5,6]` to `[5,6,4]`,
we only need to check if `4` is greater than `3`. So we only need to compare 4
and 3, 7 and 5, and then 8 and 6.

```haskell
countIncreasesPart2 :: [Int] -> Int
countIncreasesPart2 xs = length (filter (== True) (zipWith (<) xs (drop 3 xs)))
```

We just need to replace `drop 1 xs` with `drop 3 xs` to compare three-away
items.
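To double-check the window-sum argument, here is a standalone sketch (on the puzzle's sample depths) comparing the `drop 3` trick against a naive version that actually sums each three-item window; `countIncreasesNaive` is a hypothetical name for the naive variant:

```haskell
import Data.List (tails)

countIncreasesPart2 :: [Int] -> Int
countIncreasesPart2 xs = length (filter (== True) (zipWith (<) xs (drop 3 xs)))

-- naive version: actually sum every 3-item window, then count increases
countIncreasesNaive :: [Int] -> Int
countIncreasesNaive xs = length (filter (== True) (zipWith (<) sums (drop 1 sums)))
  where
    sums = [a + b + c | (a:b:c:_) <- tails xs]

main :: IO ()
main = do
    let sample = [199, 200, 208, 210, 200, 207, 240, 269, 260, 263]
    print (countIncreasesPart2 sample)  -- 5
    print (countIncreasesNaive sample)  -- 5, same answer
```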
Anyway, the parsing in Haskell is straightforward, at least -- we can just do
`map read . lines` to split our input into lines and then map
`read :: String -> Int` over each line. Ta dah! Fun start to the year :)
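For example, wrapped in a hypothetical `parseInput` helper:

```haskell
-- split the input on newlines and read each line as an Int
parseInput :: String -> [Int]
parseInput = map read . lines

main :: IO ()
main = print (parseInput "199\n200\n208")  -- [199,200,208]
```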
>> Day 01a
benchmarking...
time 21.56 μs (21.53 μs .. 21.59 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 21.55 μs (21.53 μs .. 21.58 μs)
std dev 79.03 ns (62.41 ns .. 111.6 ns)
* parsing and formatting times excluded
>> Day 01b
benchmarking...
time 19.43 μs (19.39 μs .. 19.47 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 19.36 μs (19.31 μs .. 19.41 μs)
std dev 146.8 ns (116.7 ns .. 199.0 ns)
* parsing and formatting times excluded
Day 2

Top / Prompt / Code / Standalone
Day 2 has a satisfying "unified" solution for both parts that can be derived from group theory! The general group (or monoid) design pattern that I've gone over in many Advent of Code blog posts is that we can think of our "final action" as simply a "squishing" of individual smaller actions. The revelation is that our individual smaller actions are "combinable" to yield something of the same type, so solving the puzzle is just a matter of generating all of the smaller actions and repeatedly combining them to yield the final action.
In both of these parts, we can think of squishing a bunch of small actions
(`forward`, `up`, `down`) into a mega-action, which represents the final trip
as one big step. So here is our general solver:
```haskell
-- | A type for x-y coordinates/2d vectors
data Point = P { pX :: Int, pY :: Int }

day02
    :: Monoid r
    => (Int -> r)            -- ^ construct a forward action
    -> (Int -> r)            -- ^ construct an up/down action
    -> (r -> Point -> Point) -- ^ how to apply an action to a point
    -> String
    -> Point                 -- ^ the final point
day02 mkForward mkUpDown applyAct =
      (`applyAct` P 0 0)           -- get the answer by applying from 0,0
    . foldMap (parseAsDir . words) -- convert each line into the action and merge
    . lines                        -- split up lines
  where
    parseAsDir (dir:n:_) = case dir of
        "forward" -> mkForward amnt
        "down"    -> mkUpDown amnt
        "up"      -> mkUpDown (-amnt)
      where
        amnt = read n
```
And there we have it! A solver for both parts 1 and 2. Now we just need to pick the `Monoid` :)
For part 1, the choice of monoid is simple: our final action is a translation by a vector, so it makes sense that the intermediate actions would be vectors as well -- composing actions means adding those vectors together.
```haskell
data Vector = V { dX :: Int, dY :: Int }

instance Semigroup Vector where
    V dx dy <> V dx' dy' = V (dx + dx') (dy + dy')
instance Monoid Vector where
    mempty = V 0 0

day02a :: String -> Int
day02a = (\(P x y) -> x * y)  -- the answer is the product of the coordinates
       . day02
           (\dx -> V dx 0)    -- forward is just a horizontal vector
           (\dy -> V 0 dy)    -- up/down is a vertical vector
           (\(V dx dy) (P x0 y0) -> P (x0 + dx) (y0 + dy))
```
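Assembling the part 1 pieces into one self-contained sketch (with the final `x * y` product the puzzle asks for), run on the sample course from the puzzle description:

```haskell
data Point  = P { pX :: Int, pY :: Int }
data Vector = V { dX :: Int, dY :: Int }

instance Semigroup Vector where
    V dx dy <> V dx' dy' = V (dx + dx') (dy + dy')
instance Monoid Vector where
    mempty = V 0 0

-- the generic solver from above
day02 :: Monoid r => (Int -> r) -> (Int -> r) -> (r -> Point -> Point) -> String -> Point
day02 mkForward mkUpDown applyAct =
      (`applyAct` P 0 0)
    . foldMap (parseAsDir . words)
    . lines
  where
    parseAsDir (dir:n:_) = case dir of
        "forward" -> mkForward amnt
        "down"    -> mkUpDown amnt
        "up"      -> mkUpDown (-amnt)
      where
        amnt = read n
    parseAsDir _ = mempty   -- ignore malformed lines

day02a :: String -> Int
day02a = (\(P x y) -> x * y)
       . day02 (\dx -> V dx 0)
               (\dy -> V 0 dy)
               (\(V dx dy) (P x0 y0) -> P (x0 + dx) (y0 + dy))

main :: IO ()
main = print (day02a "forward 5\ndown 5\nforward 8\nup 3\ndown 8\nforward 2")  -- 150
```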
Part 2 is a little trickier because we have to keep track of dx, dy, and aim.
So we can think of our action as manipulating a `Point` as well as an `Aim`,
and combining them together.

```haskell
newtype Aim = Aim Int

instance Semigroup Aim where
    Aim a <> Aim b = Aim (a + b)
instance Monoid Aim where
    mempty = Aim 0
```
So our "action" looks like:

```haskell
data Part2Action = P2A { p2Vector :: Vector, p2Aim :: Aim }
```
However, it's not exactly obvious how to turn this into a monoid. How do we
combine two `Part2Action`s to create a new one, in a way that respects the
logic of part 2? Simply adding them point-wise does not do the trick, because
we have to somehow also get the `Aim` to factor into the new y value.

Group theory to the rescue! Using the monoid-extras library, we can say
that `Aim` encodes a "vector transformer": applying an aim adds to the dy
value the aim value multiplied by the dx component.

```haskell
instance Action Aim Vector where
    act (Aim a) = moveDownByAimFactor
      where
        moveDownByAimFactor (V dx dy) = V dx (dy + a * dx)
```
Because of this, we can now pair together `Vector` and `Aim` as a semi-direct
product: if we pair up our monoid (`Vector`) with a "point transformer"
(`Aim`), then `Semi Vector Aim` is a monoid that contains both (like our
`Part2Action` above) but also provides a `Monoid` instance that "does the right
thing" (adds vector, adds aim, and also makes sure the aim action gets applied
correctly when adding vectors) thanks to group theory.
```haskell
-- constructors/deconstructors that monoid-extras gives us
inject :: Vector -> Semi Vector Aim
embed  :: Aim    -> Semi Vector Aim
untag  :: Semi Vector Aim -> Vector

day02b :: String -> Int
day02b = (\(P x y) -> x * y)        -- the answer is the product of the coordinates
       . day02
           (\dx -> inject $ V dx 0) -- forward just contributes a vector motion
           (\a  -> embed  $ Aim a ) -- up/down just adjusts the aim
           (\sdp (P x0 y0) ->
              let V dx dy = untag sdp
              in  P (x0 + dx) (y0 + dy)
           )
```
And that's it, we're done, thanks to the power of group theory! We identified
that our final monoid must somehow contain both components (`Vector` and
`Aim`), but did not know how the two could come together to form a mega-monoid
of both. However, because we saw that `Aim` also gets accumulated while also
acting as a "point transformer", we can describe how it transforms points (with
the `Action` instance) and so we can use `Semi` (semi-direct product) to encode
our action with a `Monoid` instance that does the right thing.
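To see the semi-direct product at work without pulling in monoid-extras, here is a hand-rolled sketch of the same idea (`SemiVA`, `actAim`, and `day02b'` are names invented for this sketch): when combining two pairs, the left `Aim` acts on the incoming `Vector` before the vectors are added, and the aims simply add. On the puzzle's sample course this gives 900, as expected.

```haskell
data Vector = V { dX :: Int, dY :: Int }
newtype Aim = Aim Int

instance Semigroup Vector where
    V dx dy <> V dx' dy' = V (dx + dx') (dy + dy')
instance Monoid Vector where
    mempty = V 0 0

instance Semigroup Aim where
    Aim a <> Aim b = Aim (a + b)
instance Monoid Aim where
    mempty = Aim 0

-- how an Aim acts on a Vector (the Action instance from the text)
actAim :: Aim -> Vector -> Vector
actAim (Aim a) (V dx dy) = V dx (dy + a * dx)

-- hand-rolled stand-in for monoid-extras' Semi: the left aim
-- transforms the right vector before the vectors are added
data SemiVA = SemiVA Vector Aim

instance Semigroup SemiVA where
    SemiVA v1 a1 <> SemiVA v2 a2 = SemiVA (v1 <> actAim a1 v2) (a1 <> a2)
instance Monoid SemiVA where
    mempty = SemiVA mempty mempty

day02b' :: String -> Int
day02b' = answer . foldMap (parse . words) . lines
  where
    parse ("forward":n:_) = SemiVA (V (read n) 0) (Aim 0)
    parse ("down":n:_)    = SemiVA mempty (Aim (read n))
    parse ("up":n:_)      = SemiVA mempty (Aim (negate (read n)))
    parse _               = mempty
    answer (SemiVA (V dx dy) _) = dx * dy

main :: IO ()
main = print (day02b' "forward 5\ndown 5\nforward 8\nup 3\ndown 8\nforward 2")  -- 900
```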
What was the point of this? Well, we were able to unify both parts 1 and 2 to
be solved with the same overall method, just by picking a different monoid for
each part. With only two parts, it might not seem worth it to abstract,
but if there were more parts we could experiment with what other neat monoids
we could express our solution as! A major advantage we reap right now, though,
is that because each action combines into other actions (associatively), we
could do all of this in parallel! If our list of actions were very long, we
could distribute the work over multiple cores or computers and re-combine like
a map-reduce. There's just something very satisfying about having the "final
action" be of the same type as our intermediate actions. With that
revelation, we open the door to the entire field of monoid-based optimizations
and pre-made algorithms (like `Semi`).
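As a tiny illustration of that map-reduce point with part 1's monoid: because `<>` is associative, folding two chunks separately (as two hypothetical "workers" would) and then combining the partial results gives the same total displacement as one big fold.

```haskell
data Vector = V { dX :: Int, dY :: Int } deriving (Eq, Show)

instance Semigroup Vector where
    V dx dy <> V dx' dy' = V (dx + dx') (dy + dy')
instance Monoid Vector where
    mempty = V 0 0

main :: IO ()
main = do
    let moves       = [V 5 0, V 0 5, V 8 0, V 0 (-3), V 0 8, V 2 0]
        (lo, hi)    = splitAt 3 moves    -- pretend these go to two workers
        oneBigFold  = mconcat moves
        chunkedFold = mconcat lo <> mconcat hi
    print oneBigFold                     -- V {dX = 15, dY = 10}
    print (oneBigFold == chunkedFold)    -- True
```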
(Thanks to mniip
in libera irc's #adventofcode channel for helping me
express this in terms of a semi-direct product! My original attempt used a 4x4
matrix that ended up doing the same thing after some symbolic analysis.)
(Thanks too to @lysxia on twitter for pointing out a nicer way of interpreting the action in terms of how it acts on points!)
>> Day 02a
benchmarking...
time 2.068 μs (2.065 μs .. 2.071 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 2.085 μs (2.075 μs .. 2.108 μs)
std dev 50.18 ns (32.67 ns .. 74.33 ns)
variance introduced by outliers: 29% (moderately inflated)
* parsing and formatting times excluded
>> Day 02b
benchmarking...
time 959.8 μs (957.4 μs .. 962.7 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 954.7 μs (952.9 μs .. 957.2 μs)
std dev 7.530 μs (6.116 μs .. 9.523 μs)
>> Day 03a
benchmarking...
time 443.6 μs (442.8 μs .. 444.6 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 445.0 μs (444.4 μs .. 445.5 μs)
std dev 1.850 μs (1.556 μs .. 2.238 μs)
* parsing and formatting times excluded
>> Day 03b
benchmarking...
time 275.0 μs (274.6 μs .. 275.3 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 273.4 μs (272.7 μs .. 274.0 μs)
std dev 2.242 μs (1.825 μs .. 2.680 μs)
* parsing and formatting times excluded
>> Day 04a
benchmarking...
time 226.6 μs (226.0 μs .. 227.1 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 225.2 μs (224.6 μs .. 225.7 μs)
std dev 1.823 μs (1.438 μs .. 2.116 μs)
* parsing and formatting times excluded
>> Day 04b
benchmarking...
time 544.0 μs (537.8 μs .. 551.2 μs)
0.999 R² (0.998 R² .. 1.000 R²)
mean 541.4 μs (539.2 μs .. 545.3 μs)
std dev 10.43 μs (6.177 μs .. 18.27 μs)
variance introduced by outliers: 10% (moderately inflated)
* parsing and formatting times excluded
>> Day 05a
benchmarking...
time 2.276 ms (2.245 ms .. 2.303 ms)
0.998 R² (0.997 R² .. 0.999 R²)
mean 2.239 ms (2.219 ms .. 2.262 ms)
std dev 80.14 μs (65.88 μs .. 107.1 μs)
variance introduced by outliers: 21% (moderately inflated)
* parsing and formatting times excluded
>> Day 05b
benchmarking...
time 12.80 ms (12.71 ms .. 12.88 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 12.87 ms (12.83 ms .. 12.89 ms)
std dev 79.03 μs (63.31 μs .. 100.3 μs)
* parsing and formatting times excluded
>> Day 06a
benchmarking...
time 5.432 μs (5.421 μs .. 5.442 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 5.440 μs (5.431 μs .. 5.446 μs)
std dev 27.96 ns (24.50 ns .. 34.53 ns)
* parsing and formatting times excluded
>> Day 06b
benchmarking...
time 5.431 μs (5.420 μs .. 5.445 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 5.419 μs (5.415 μs .. 5.430 μs)
std dev 20.29 ns (14.85 ns .. 27.37 ns)
* parsing and formatting times excluded
>> Day 07a
benchmarking...
time 257.5 μs (256.2 μs .. 259.9 μs)
0.998 R² (0.995 R² .. 1.000 R²)
mean 257.4 μs (256.0 μs .. 262.6 μs)
std dev 8.661 μs (488.2 ns .. 18.40 μs)
variance introduced by outliers: 29% (moderately inflated)
* parsing and formatting times excluded
>> Day 07b
benchmarking...
time 269.8 μs (269.4 μs .. 270.4 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 269.9 μs (269.7 μs .. 270.6 μs)
std dev 1.198 μs (979.5 ns .. 1.585 μs)
* parsing and formatting times excluded
>> Day 08a
benchmarking...
time 30.57 μs (30.52 μs .. 30.61 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 30.54 μs (30.52 μs .. 30.58 μs)
std dev 92.33 ns (75.65 ns .. 122.4 ns)
* parsing and formatting times excluded
>> Day 08b
benchmarking...
time 312.0 μs (306.1 μs .. 318.4 μs)
0.998 R² (0.996 R² .. 0.999 R²)
mean 318.3 μs (314.4 μs .. 327.4 μs)
std dev 18.23 μs (7.567 μs .. 33.67 μs)
variance introduced by outliers: 54% (severely inflated)
* parsing and formatting times excluded
>> Day 09a
benchmarking...
time 851.5 μs (849.9 μs .. 853.1 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 852.0 μs (850.8 μs .. 853.2 μs)
std dev 4.482 μs (3.088 μs .. 6.365 μs)
* parsing and formatting times excluded
>> Day 09b
benchmarking...
time 2.703 ms (2.697 ms .. 2.710 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 2.679 ms (2.673 ms .. 2.684 ms)
std dev 19.23 μs (16.05 μs .. 25.50 μs)
* parsing and formatting times excluded
>> Day 10a
benchmarking...
time 38.41 μs (38.32 μs .. 38.55 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 38.36 μs (38.31 μs .. 38.44 μs)
std dev 237.6 ns (188.0 ns .. 331.4 ns)
* parsing and formatting times excluded
>> Day 10b
benchmarking...
time 44.86 μs (44.24 μs .. 45.50 μs)
0.999 R² (0.998 R² .. 1.000 R²)
mean 44.10 μs (43.95 μs .. 44.43 μs)
std dev 764.1 ns (402.1 ns .. 1.398 μs)
variance introduced by outliers: 13% (moderately inflated)
* parsing and formatting times excluded
>> Day 11a
benchmarking...
time 5.651 ms (5.620 ms .. 5.739 ms)
0.998 R² (0.992 R² .. 1.000 R²)
mean 5.681 ms (5.654 ms .. 5.762 ms)
std dev 146.8 μs (17.72 μs .. 280.0 μs)
* parsing and formatting times excluded
>> Day 11b
benchmarking...
time 11.18 ms (11.15 ms .. 11.21 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 11.19 ms (11.18 ms .. 11.20 ms)
std dev 28.54 μs (21.73 μs .. 37.19 μs)
* parsing and formatting times excluded
>> Day 12a
benchmarking...
time 2.712 ms (2.701 ms .. 2.726 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 2.706 ms (2.702 ms .. 2.714 ms)
std dev 15.19 μs (6.637 μs .. 27.74 μs)
>> Day 12b
benchmarking...
time 83.12 ms (82.73 ms .. 83.45 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 83.35 ms (83.21 ms .. 83.59 ms)
std dev 330.7 μs (201.8 μs .. 499.5 μs)
>> Day 13a
benchmarking...
time 178.2 μs (178.1 μs .. 178.4 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 178.0 μs (177.9 μs .. 178.2 μs)
std dev 481.7 ns (410.0 ns .. 569.1 ns)
* parsing and formatting times excluded
>> Day 13b
benchmarking...
time 445.8 μs (444.9 μs .. 446.8 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 446.2 μs (445.5 μs .. 446.8 μs)
std dev 2.269 μs (1.865 μs .. 2.807 μs)
* parsing and formatting times excluded
>> Day 14a
benchmarking...
time 168.0 μs (167.7 μs .. 168.4 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 168.3 μs (168.1 μs .. 168.5 μs)
std dev 639.8 ns (510.7 ns .. 882.4 ns)
* parsing and formatting times excluded
>> Day 14b
benchmarking...
time 704.3 μs (703.4 μs .. 705.1 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 704.4 μs (703.7 μs .. 705.2 μs)
std dev 2.520 μs (1.972 μs .. 3.377 μs)
* parsing and formatting times excluded
>> Day 15a
benchmarking...
time 36.30 ms (35.96 ms .. 36.74 ms)
1.000 R² (0.999 R² .. 1.000 R²)
mean 35.88 ms (35.65 ms .. 36.06 ms)
std dev 476.5 μs (322.4 μs .. 689.0 μs)
* parsing and formatting times excluded
>> Day 15b
benchmarking...
time 1.450 s (1.414 s .. 1.525 s)
1.000 R² (0.999 R² .. 1.000 R²)
mean 1.447 s (1.435 s .. 1.458 s)
std dev 13.92 ms (8.694 ms .. 16.94 ms)
variance introduced by outliers: 19% (moderately inflated)
* parsing and formatting times excluded
>> Day 16a
benchmarking...
time 294.1 μs (289.8 μs .. 300.9 μs)
0.998 R² (0.996 R² .. 1.000 R²)
mean 290.0 μs (288.8 μs .. 292.2 μs)
std dev 5.852 μs (2.390 μs .. 11.73 μs)
variance introduced by outliers: 13% (moderately inflated)
* parsing and formatting times excluded
>> Day 16b
benchmarking...
time 334.9 μs (333.1 μs .. 336.4 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 333.7 μs (332.7 μs .. 334.4 μs)
std dev 2.457 μs (1.993 μs .. 2.881 μs)
* parsing and formatting times excluded
>> Day 17a
benchmarking...
time 5.192 ms (5.152 ms .. 5.246 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 5.213 ms (5.191 ms .. 5.236 ms)
std dev 75.12 μs (54.28 μs .. 106.9 μs)
* parsing and formatting times excluded
>> Day 17b
benchmarking...
time 5.168 ms (5.142 ms .. 5.199 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 5.200 ms (5.186 ms .. 5.216 ms)
std dev 46.42 μs (39.03 μs .. 58.25 μs)
* parsing and formatting times excluded