CPS internals for better performance and stack safety #154


Merged — 16 commits into purescript-contrib:main, Mar 25, 2022

Conversation

@natefaubion (Contributor) commented Mar 19, 2022

Description of the change

This PR changes the internals to a CPS encoding which:

  • Is always stack safe, no need for Trampoline (runParserT also always runs in terms of MonadRec).
  • The T part of ParserT is zero-overhead if unused (pure parsers don't pay an abstraction tax). In fact, pure parsers that are otherwise using Trampoline as their base will get upgraded "for free". The T cost is only paid at each call to lift rather than in every bind, apply, etc.
  • Significantly faster (3x faster on the parse23 bench, 24ms down from 75ms, in comparison StringParsers was 14ms).
  • Doesn't need to propagate a Monad constraint everywhere, which can lead to significantly better sharing with the Lazy instance (which is now properly memoized).

I've also replaced some inefficient implementations (gets and puts with multiple binds) in the Strings module. In general, I think a lot of combinators may be able to be improved in a similar way, it's just more tedious.

Note that while string-parsers is still a bit faster, it is not stack-safe in general (so it can have less overhead). This implementation makes all parsers stack safe in their execution. I suspect that if you made string-parsers stack-safe in the same way, the libraries would have identical performance characteristics. My personal opinion is that if you accept this PR, anyone caring about stack safety has no reason to use string-parsers as a specialization.

The downside is of course that this implementation is more complicated. I'll let you be the judge!
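For intuition, the CPS idea can be sketched in plain JavaScript (a hypothetical illustration only, not this PR's actual encoding — the real version uses an uncurried `Fn5` over the parse state): a parser receives success and failure continuations and returns a thunk instead of calling deeper, so a driver loop can "bounce" with a flat stack.

```javascript
// Hypothetical sketch of a CPS + trampoline parser (not the PR's real code).
// A parser takes state plus success/failure continuations and returns a
// thunk, so the runner loops instead of growing the JS stack.
const pureP = (a) => (state, ok, err) => () => ok(state, a);

const bindP = (p, f) => (state, ok, err) => () =>
  p(state, (state2, a) => f(a)(state2, ok, err), err);

const anyChar = (state, ok, err) => () =>
  state.pos < state.input.length
    ? ok({ input: state.input, pos: state.pos + 1 }, state.input[state.pos])
    : err(state, "Unexpected EOF");

// The runner trampolines: force thunks until a final result object appears.
function runP(p, input) {
  let step = p(
    { input, pos: 0 },
    (_state, a) => ({ result: { right: a } }),
    (_state, msg) => ({ result: { left: msg } })
  );
  while (typeof step === "function") step = step();
  return step.result;
}

// A bind chain deep enough to overflow a naive direct-style interpreter:
let deep = pureP(0);
for (let i = 0; i < 100000; i++) deep = bindP(deep, (n) => pureP(n + 1));
console.log(runP(deep, "").right); // 100000
```

Every `bind` costs one thunk allocation rather than a stack frame, which is why stack safety comes for free and why the `T` overhead can be deferred to `lift`.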


Checklist:

  • Added the change to the changelog's "Unreleased" section with a link to this PR and your username
  • Linked any existing issues or proposals that this pull request should close
  • Updated or added relevant documentation in the README and/or documentation directory
  • Added a test for the contribution (if applicable)

@natefaubion (Contributor, Author)

I should also point out that the benchmark as-is really only tests the performance and overhead of a MonadRec instance. It makes sense to me that it's 3x faster, because we are removing multiple layers of Monad transformers. I would suspect that real-world parsers would get a much bigger general boost, but I don't have benchmarks for that on hand. It would be useful to have something like a JSON parser for benchmarking.

@natefaubion natefaubion mentioned this pull request Mar 20, 2022
@natefaubion (Contributor, Author)

I've added a JSON parser benchmark, which shows a significant improvement over string-parsers:

runParser json smallJson
mean   = 298.20 μs
stddev = 303.26 μs
min    = 197.37 μs
max    = 4.33 ms
StringParser.runParser json smallJson
mean   = 583.36 μs
stddev = 582.04 μs
min    = 321.77 μs
max    = 5.26 ms
runParser json mediumJson
mean   = 6.28 ms
stddev = 1.01 ms
min    = 5.60 ms
max    = 26.05 ms
StringParser.runParser json mediumJson
mean   = 9.65 ms
stddev = 925.12 μs
min    = 8.97 ms
max    = 27.34 ms
runParser json largeJson
mean   = 23.01 ms
stddev = 5.42 ms
min    = 20.86 ms
max    = 62.86 ms
StringParser.runParser json largeJson
mean   = 34.52 ms
stddev = 3.61 ms
min    = 33.47 ms
max    = 69.35 ms

@jamesdbrock (Member)

Awesome, thanks, I will take a hard look at this and let’s get it merged.

@chtenb (Member) commented Mar 21, 2022

Where does the edge over StringParsers mainly come from? Is it deferred error messages or actually something else? I won't pretend to understand the code which is why I'm asking.

@jamesdbrock (Member)

> Where does the edge over StringParsers mainly come from? Is it deferred error messages or actually something else? I won't pretend to understand the code which is why I'm asking.

I think it comes from the continuation-passing-style representation of ParserT, which replaces the ExceptT/StateT representation.

@natefaubion (Contributor, Author)

> Where does the edge over StringParsers mainly come from? Is it deferred error messages or actually something else? I won't pretend to understand the code which is why I'm asking.

I don't have a definitive reason, but my first guess would be that string-parsers may be using inefficient implementations. Both versions of skipSpaces are extremely inefficient, but string-parsers is especially bad since it is building an explicit List of chars, foldMapping over it to build a string, and then immediately discarding it. Parsing is at least not building an intermediate data structure and processing it (aside from the bind spine). Both should really just be a single call to String.dropWhile.

If both had optimized implementations, I would expect string-parsers to be slightly faster since it doesn't have to go through a trampoline.
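The skipSpaces contrast can be sketched in JavaScript (a hypothetical illustration of the point, not code from either library): a single scan-and-slice versus building a throwaway list of characters.

```javascript
// Efficient shape: one scan, one slice, no intermediate structure
// (the analogue of a single String.dropWhile call).
function skipSpacesFast(input) {
  let i = 0;
  while (i < input.length && /\s/.test(input[i])) i++;
  return input.slice(i);
}

// The inefficient shape being criticized: build an explicit list of
// matched chars, fold it back into a string, then discard it.
function skipSpacesSlow(input) {
  const chars = [];
  let rest = input;
  while (rest.length > 0 && /\s/.test(rest[0])) {
    chars.push(rest[0]); // explicit list of chars...
    rest = rest.slice(1); // ...plus a fresh string per character
  }
  chars.join(""); // folded into a string, then immediately thrown away
  return rest;
}
```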

@natefaubion (Contributor, Author)

I've made a few more small optimizations (optimizing the MonadRec instance), which bring parsing to parity with string-parsers on the previous benchmarks and to 2x string-parsers performance in the JSON parsing benchmark.

runParser parse23
mean   = 18.21 ms
stddev = 4.40 ms
min    = 15.42 ms
max    = 70.42 ms
StringParser.runParser parse23Points
mean   = 19.52 ms
stddev = 10.86 ms
min    = 14.73 ms
max    = 52.55 ms
StringParser.runParser parse23Units
mean   = 15.58 ms
stddev = 1.99 ms
min    = 14.32 ms
max    = 33.91 ms
Regex.match pattern23
mean   = 1.18 ms
stddev = 544.69 μs
min    = 966.51 μs
max    = 6.17 ms
runParser parseSkidoo
mean   = 27.76 ms
stddev = 6.74 ms
min    = 21.73 ms
max    = 85.59 ms
Regex.match patternSkidoo
mean   = 562.74 μs
stddev = 301.43 μs
min    = 447.25 μs
max    = 3.69 ms
runParser json smallJson
mean   = 249.24 μs
stddev = 248.36 μs
min    = 161.31 μs
max    = 5.49 ms
StringParser.runParser json smallJson
mean   = 434.07 μs
stddev = 403.13 μs
min    = 300.05 μs
max    = 5.80 ms
runParser json mediumJson
mean   = 4.87 ms
stddev = 337.38 μs
min    = 4.24 ms
max    = 6.46 ms
StringParser.runParser json mediumJson
mean   = 9.43 ms
stddev = 537.78 μs
min    = 8.77 ms
max    = 15.63 ms
runParser json largeJson
mean   = 17.15 ms
stddev = 686.91 μs
min    = 16.14 ms
max    = 21.73 ms
StringParser.runParser json largeJson
mean   = 33.58 ms
stddev = 895.23 μs
min    = 32.80 ms
max    = 39.27 ms

@jamesdbrock (Member) commented Mar 22, 2022

Here are some benchmarks for before and after this PR.

The benchmarks look amazing. I am seeing about 5× speedup across the board for this PR. I also see the “2× string-parsers performance”.

(One odd note which probably doesn’t matter for this PR: I separated the benchmarks which use many from the benchmarks which use manyRec. Before this PR, the manyRec benchmarks were faster. That’s surprising.)

Before this PR / After this PR — each benchmark below is listed twice: the run before this PR first, then the run after.
runParser parse23
mean   = 4.03 ms
stddev = 2.90 ms
min    = 2.90 ms
max    = 31.57 ms
runParser parse23
mean   = 660.35 μs
stddev = 818.82 μs
min    = 383.34 μs
max    = 8.65 ms
StringParser.runParser parse23Points
mean   = 1.52 ms
stddev = 1.46 ms
min    = 405.67 μs
max    = 4.86 ms
StringParser.runParser parse23Points
mean   = 1.41 ms
stddev = 1.24 ms
min    = 415.32 μs
max    = 4.13 ms
StringParser.runParser parse23Units
mean   = 507.07 μs
stddev = 260.09 μs
min    = 381.62 μs
max    = 2.75 ms
StringParser.runParser parse23Units
mean   = 551.02 μs
stddev = 362.46 μs
min    = 391.51 μs
max    = 3.29 ms
runParser parse23Rec
mean   = 2.42 ms
stddev = 538.56 μs
min    = 2.22 ms
max    = 6.43 ms
runParser parse23Rec
mean   = 534.56 μs
stddev = 203.86 μs
min    = 442.47 μs
max    = 1.99 ms
StringParser.runParser parse23PointsRec
mean   = 907.10 μs
stddev = 651.06 μs
min    = 353.39 μs
max    = 2.68 ms
StringParser.runParser parse23PointsRec
mean   = 960.30 μs
stddev = 618.21 μs
min    = 391.23 μs
max    = 2.44 ms
StringParser.runParser parse23UnitsRec
mean   = 491.16 μs
stddev = 111.38 μs
min    = 429.61 μs
max    = 1.42 ms
StringParser.runParser parse23UnitsRec
mean   = 494.57 μs
stddev = 153.29 μs
min    = 428.93 μs
max    = 1.66 ms
Regex.match pattern23
mean   = 39.62 μs
stddev = 30.90 μs
min    = 28.40 μs
max    = 262.81 μs
Regex.match pattern23
mean   = 39.75 μs
stddev = 31.65 μs
min    = 28.88 μs
max    = 267.79 μs
runParser parseSkidoo
mean   = 3.45 ms
stddev = 611.82 μs
min    = 2.85 ms
max    = 7.30 ms
runParser parseSkidoo
mean   = 791.29 μs
stddev = 496.96 μs
min    = 633.50 μs
max    = 6.49 ms
runParser parseSkidooRec
mean   = 2.36 ms
stddev = 305.05 μs
min    = 2.18 ms
max    = 5.39 ms
runParser parseSkidooRec
mean   = 809.19 μs
stddev = 84.16 μs
min    = 744.72 μs
max    = 1.32 ms
Regex.match patternSkidoo
mean   = 40.31 μs
stddev = 16.30 μs
min    = 33.66 μs
max    = 232.91 μs
Regex.match patternSkidoo
mean   = 41.08 μs
stddev = 32.32 μs
min    = 32.45 μs
max    = 352.60 μs
runParser json smallJson
mean   = 740.54 μs
stddev = 296.10 μs
min    = 572.79 μs
max    = 3.96 ms
runParser json smallJson
mean   = 161.96 μs
stddev = 115.48 μs
min    = 116.17 μs
max    = 2.14 ms
StringParser.runParser json smallJson
mean   = 343.88 μs
stddev = 250.59 μs
min    = 226.67 μs
max    = 2.86 ms
StringParser.runParser json smallJson
mean   = 354.77 μs
stddev = 294.87 μs
min    = 222.72 μs
max    = 2.71 ms
runParser json mediumJson
mean   = 19.79 ms
stddev = 2.10 ms
min    = 18.54 ms
max    = 37.22 ms
runParser json mediumJson
mean   = 3.52 ms
stddev = 274.20 μs
min    = 3.17 ms
max    = 7.45 ms
StringParser.runParser json mediumJson
mean   = 6.71 ms
stddev = 287.29 μs
min    = 6.38 ms
max    = 12.05 ms
StringParser.runParser json mediumJson
mean   = 6.59 ms
stddev = 204.89 μs
min    = 6.30 ms
max    = 9.55 ms
runParser json largeJson
mean   = 66.55 ms
stddev = 4.33 ms
min    = 64.42 ms
max    = 92.17 ms
runParser json largeJson
mean   = 12.56 ms
stddev = 356.33 μs
min    = 12.03 ms
max    = 14.73 ms
StringParser.runParser json largeJson
mean   = 23.53 ms
stddev = 942.88 μs
min    = 22.99 ms
max    = 32.31 ms
StringParser.runParser json largeJson
mean   = 23.79 ms
stddev = 501.29 μs
min    = 23.27 ms
max    = 27.17 ms

@jamesdbrock (Member) left a review comment:

I am ready to merge this after we resolve the codePointAt question, see comments.

( forall r
. Fn5
(ParseState s)
((Unit -> r) -> r) -- Trampoline
jamesdbrock (Member):

@natefaubion May I beg you for a short prose description of what is going on here. You discussed it a bit in the PR but can you please be more explicit?


 -- | Match one or more times.
-many1 :: forall m s a. Monad m => ParserT s m a -> ParserT s m (NonEmptyList a)
+many1 :: forall m s a. ParserT s m a -> ParserT s m (NonEmptyList a)
 many1 p = NEL.cons' <$> p <*> many p

-- | Match one or more times.
-- |
-- | Stack-safe version of `many1` at the expense of a `MonadRec` constraint.
jamesdbrock (Member):

I think after this PR we should deprecate all of the Rec combinator variations?

natefaubion (Contributor, Author):

Not necessarily, or at least I don't think you should just remove the Rec implementations without testing. The parser is always stack-safe now, yes, but the iterative implementations may be more efficient at runtime. For example, the upstream many combinator is significantly slower than manyRec. It may be the case that we should copy over the Rec implementations and remove the calls to tailRecM, just using standard monadic recursion. It could also be the case that the "naive" implementations perform fine in comparison and we should drop the Rec implementations.

@@ -211,12 +225,6 @@ match p = do
-- boundary.
pure $ Tuple (SCU.take (SCU.length input1 - SCU.length input2) input1) x

-- | The CodePoint newtype constructor is not exported, so here's a helper.
jamesdbrock (Member):

Oh yeah fromEnum is better.

Just { head, tail } -> updatePosString (updatePosSingle pos head) tail -- tail recursive
updatePosString = go 0
where
go ix pos str = case codePointAt ix str of
jamesdbrock (Member):

Ah this might be a problem. codePointAt is linear in ix. So then updatePosString will be quadratic in length str.
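The quadratic blow-up can be modeled with a hypothetical linear codePointAt (illustrative only — a stand-in for code-point indexing that must walk from the start of the string on every call): scanning positions 0..n−1 then costs 1 + 2 + … + n steps.

```javascript
// Model of the concern: a codePointAt that walks from the start of the
// string on each call, so calling it for every index is quadratic overall.
let steps = 0;
function codePointAtLinear(ix, str) {
  for (let i = 0; i <= ix; i++) steps++; // re-walk the prefix each call
  return ix < str.length ? str[ix] : undefined;
}

// Advancing the position one code point at a time via the linear lookup:
function walkQuadratic(str) {
  for (let ix = 0; ix < str.length; ix++) codePointAtLinear(ix, str);
}

walkQuadratic("a".repeat(100));
console.log(steps); // 5050 = 100 * 101 / 2 — quadratic in the input length
```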

jamesdbrock (Member):

If you agree that this is a problem @natefaubion then I can fix this myself after we merge this PR.

natefaubion (Contributor, Author):

I can revert this. I was trying to avoid the extra allocations processing each char, but if it’s linear it could get out of hand.

natefaubion (Contributor, Author) commented Mar 22, 2022:

I do think this routine in general could use a bit of work, however. I believe right now it will consider \r\n as two lines since it only ever processes one codepoint at a time.

runFn2 throw state1 (ParseError "Unexpected EOF" pos)
Just { head, tail } -> do
let cp = fromEnum head
-- the `fromCharCode` function doesn't check if this is beyond the
jamesdbrock (Member):

Note to self: We’re not using fromCharCode anymore so delete this comment.

@natefaubion (Contributor, Author)

Would you be willing to compare the json benchmarks with a trampolined parser? I don’t feel like 5x fully captures the improvement :D.

@natefaubion (Contributor, Author)

I've added comments and reverted the position implementation. I've also updated it to account for \r\n as a single line bump. I've additionally added an esoteric, low-level splitMap combinator which is useful for any sort of combinator that needs to split the input string in a general way.

@jamesdbrock (Member)

jamesdbrock commented Mar 23, 2022

> Would you be willing to compare the json benchmarks with a trampolined parser?

Yes! How do I do that? I can’t figure out how to run a Trampoline...

runTrampoline $ runParserT smallJson BenchParsing.json
  Could not match type
    Identity
  with type
    Function Unit

@natefaubion (Contributor, Author) commented Mar 23, 2022

You would need to add this to the benchmark parser:

type Parser s = ParserT s Trampoline

Right now it's using the exported Parser alias, which is fixed to Identity. Also, you need to run it with runTrampoline <<< runParserT instead of runParser.

@jamesdbrock (Member) commented Mar 23, 2022

> Would you be willing to compare the json benchmarks with a trampolined parser? I don’t feel like 5x fully captures the improvement :D.

Ok here are trampolined variations of the Json parsers, from before this PR and after your latest commits.

As you claimed, there appears to be zero cost for explicitly using a Trampoline base monad after this PR. Before this PR the Trampoline-based parsers are 20× slower.

Before this PR / After this PR — each benchmark below is listed twice: the run before this PR first, then the run after.
runParser parse23
mean   = 4.38 ms
stddev = 3.28 ms
min    = 3.08 ms
max    = 35.63 ms
runParser parse23
mean   = 578.59 μs
stddev = 740.09 μs
min    = 318.19 μs
max    = 7.43 ms
StringParser.runParser parse23Points
mean   = 1.35 ms
stddev = 1.18 ms
min    = 413.20 μs
max    = 3.78 ms
StringParser.runParser parse23Points
mean   = 1.35 ms
stddev = 1.15 ms
min    = 451.94 μs
max    = 3.71 ms
StringParser.runParser parse23Units
mean   = 551.18 μs
stddev = 444.48 μs
min    = 389.49 μs
max    = 4.49 ms
StringParser.runParser parse23Units
mean   = 544.46 μs
stddev = 358.24 μs
min    = 389.65 μs
max    = 2.32 ms
runParser parse23Rec
mean   = 2.79 ms
stddev = 1.15 ms
min    = 2.38 ms
max    = 12.57 ms
runParser parse23Rec
mean   = 470.80 μs
stddev = 211.09 μs
min    = 388.47 μs
max    = 1.92 ms
StringParser.runParser parse23PointsRec
mean   = 899.92 μs
stddev = 669.59 μs
min    = 353.12 μs
max    = 2.80 ms
StringParser.runParser parse23PointsRec
mean   = 1.09 ms
stddev = 756.53 μs
min    = 394.90 μs
max    = 2.97 ms
StringParser.runParser parse23UnitsRec
mean   = 494.42 μs
stddev = 111.88 μs
min    = 441.52 μs
max    = 1.48 ms
StringParser.runParser parse23UnitsRec
mean   = 493.83 μs
stddev = 146.80 μs
min    = 427.30 μs
max    = 1.63 ms
Regex.match pattern23
mean   = 39.00 μs
stddev = 30.40 μs
min    = 28.45 μs
max    = 261.29 μs
Regex.match pattern23
mean   = 40.11 μs
stddev = 33.37 μs
min    = 28.39 μs
max    = 266.46 μs
runParser parseSkidoo
mean   = 4.02 ms
stddev = 1.44 ms
min    = 3.09 ms
max    = 14.60 ms
runParser parseSkidoo
mean   = 891.84 μs
stddev = 482.92 μs
min    = 707.66 μs
max    = 6.01 ms
runParser parseSkidooRec
mean   = 2.95 ms
stddev = 1.11 ms
min    = 2.20 ms
max    = 11.71 ms
runParser parseSkidooRec
mean   = 936.94 μs
stddev = 155.78 μs
min    = 844.76 μs
max    = 2.28 ms
Regex.match patternSkidoo
mean   = 41.73 μs
stddev = 32.75 μs
min    = 33.75 μs
max    = 418.19 μs
Regex.match patternSkidoo
mean   = 35.82 μs
stddev = 13.65 μs
min    = 32.98 μs
max    = 210.77 μs
runParser json smallJson
mean   = 1.05 ms
stddev = 628.63 μs
min    = 716.20 μs
max    = 10.96 ms
runParser json smallJson
mean   = 221.71 μs
stddev = 189.12 μs
min    = 132.98 μs
max    = 2.55 ms
runTrampoline runParser json smallJson
mean   = 2.12 ms
stddev = 859.49 μs
min    = 1.65 ms
max    = 15.04 ms
runTrampoline runParser json smallJson
mean   = 151.37 μs
stddev = 48.86 μs
min    = 133.65 μs
max    = 852.50 μs
StringParser.runParser json smallJson
mean   = 345.54 μs
stddev = 296.51 μs
min    = 221.29 μs
max    = 3.25 ms
StringParser.runParser json smallJson
mean   = 338.72 μs
stddev = 279.32 μs
min    = 218.29 μs
max    = 2.91 ms
runParser json mediumJson
mean   = 33.52 ms
stddev = 5.99 ms
min    = 28.26 ms
max    = 56.88 ms
runParser json mediumJson
mean   = 3.31 ms
stddev = 210.02 μs
min    = 2.92 ms
max    = 4.23 ms
runTrampoline runParser json mediumJson
mean   = 59.56 ms
stddev = 7.05 ms
min    = 53.97 ms
max    = 105.39 ms
runTrampoline runParser json mediumJson
mean   = 3.30 ms
stddev = 216.19 μs
min    = 2.92 ms
max    = 4.76 ms
StringParser.runParser json mediumJson
mean   = 6.58 ms
stddev = 289.48 μs
min    = 6.24 ms
max    = 11.21 ms
StringParser.runParser json mediumJson
mean   = 6.43 ms
stddev = 232.45 μs
min    = 6.10 ms
max    = 10.56 ms
runParser json largeJson
mean   = 113.75 ms
stddev = 12.56 ms
min    = 101.08 ms
max    = 144.13 ms
runParser json largeJson
mean   = 11.72 ms
stddev = 285.68 μs
min    = 11.36 ms
max    = 13.28 ms
runTrampoline runParser json largeJson
mean   = 206.86 ms
stddev = 15.23 ms
min    = 190.86 ms
max    = 249.25 ms
runTrampoline runParser json largeJson
mean   = 12.03 ms
stddev = 1.15 ms
min    = 11.35 ms
max    = 21.88 ms
StringParser.runParser json largeJson
mean   = 23.24 ms
stddev = 1.61 ms
min    = 22.29 ms
max    = 35.80 ms
StringParser.runParser json largeJson
mean   = 23.00 ms
stddev = 1.72 ms
min    = 22.09 ms
max    = 36.02 ms

@natefaubion (Contributor, Author)

I've updated the interface to splitMap and added a changelog entry.

@jamesdbrock (Member)

CI is failing due to purescript/purescript-control#80

@jamesdbrock (Member) commented Mar 24, 2022

I am gonna merge this pretty soon, after which I'll fix the <?> associativity problem. I'll probably make it infixr 0.

@natefaubion (Contributor, Author)

I've gone ahead and updated them for now so that it compiles. It's up for bikeshedding, however.

@jamesdbrock jamesdbrock merged commit 51d2843 into purescript-contrib:main Mar 25, 2022
@jamesdbrock (Member)

Thank you @natefaubion 💙

@natefaubion natefaubion deleted the cps-internals branch March 25, 2022 15:57
triallax added a commit to triallax/insect that referenced this pull request May 27, 2022
You may wonder why all these changes are done in one commit, and that's
because they're all kind of interrelated:
- I initially updated PureScript to 0.15.x, which produces code that
  uses ES modules instead of CommonJS ones
- The version of `purescript-decimals` in the 0.15.x package set was
  tested with decimal.js 10.3.1, so use that
- Since we're using ES modules now, I updated `clipboardy` and
  `xdg-basedir` to versions that use ES modules (I admit that I didn't
  have to do this one in this commit)
- `pulp browserify` doesn't work with PureScript 0.15.x, so I had to
  migrate to something else (I chose `esbuild`)

Side note: `purescript-parsing` 9.0.0 brings with it a wonderful
performance boost[1], which speeds up the test suite by 3 times on my
machine (from `102.231 s ± 0.485 s` to `34.666 s ± 0.299 s`).

[1]: purescript-contrib/purescript-parsing#154
triallax added a commit to triallax/insect that referenced this pull request May 27, 2022 (same commit message as above)
triallax added a commit to triallax/insect that referenced this pull request Jun 2, 2022 (same commit message)
triallax added a commit to triallax/insect that referenced this pull request Jun 2, 2022 (same commit message)
triallax added a commit to triallax/insect that referenced this pull request Jun 7, 2022 (same commit message)
triallax added a commit to triallax/insect that referenced this pull request Jun 9, 2022 (same commit message)
sharkdp pushed a commit to sharkdp/insect that referenced this pull request Jun 9, 2022