
Conversation

aloctavodia
Member

@aloctavodia aloctavodia commented Mar 19, 2024

Not sure if this is all that needs to be done.

The warning message disappeared and sampling provides the same result as without this change. But speed is still the same.

The code for the C implementation is from https://github.com/codeplea/incbeta; there is an associated blog post at https://codeplea.com/incomplete-beta-function-c

If I take the files from https://www.netlib.org/cephes/, the code does not compile. Maybe I am missing some extra files...

@ricardoV94 ricardoV94 changed the title Add betain c implementation Add betainc C implementation Mar 19, 2024
@arthus701
Contributor

arthus701 commented May 31, 2024

Hi everyone,

just wanted to mention that, based on the work by @aloctavodia and the existing gamma.c file, I did some polishing to the cephes incbeta.c and got it to compile on the CPU. I will test the GPU later. I'm not sure what the GitHub best practice is when hijacking someone else's pull request. Maybe you can have a look and let me know if this is of value and how to proceed.

Stuff is over at my fork; I had to branch off an earlier version of pytensor to keep it compatible with my PyMC installation, but I guess this is an easy fix.

Cheers and all the best,
Arthus

Edit / PS: I wanted to mention that I found the debug mode and used it for testing the implementation. What an amazing feature!

@aloctavodia
Member Author

This is great @arthus701. Thanks.

@arthus701
Contributor

Just tested it on a GPU and compiling works. I don't see a speedup though.

@ricardoV94
Member

What do you mean you tested on the GPU? We don't have native GPU support in this backend (C-backend).

I also don't expect speedups in many cases. The warning is a bit too crude; we should probably disable it.

@ricardoV94
Member

> Maybe you can have a look and let me know if this is of value and how to proceed.

You can push your changes directly to this PR or open a new PR, whichever suits you better.

@arthus701
Contributor

> What do you mean you tested on the GPU? We don't have native GPU support in this backend (C-backend).

It seems I misunderstood what is going on. When using `pt.exp(pm.logcdf(pm.Beta.dist(...)))` I got the warning that no C implementation of BetaInc is found. From the way gamma.c is written, I figured that either a standard C compiler or a CUDA compiler would be used, provided flexible enough C code is available. What I mean by "I tested it on the GPU" is that I ran my code both on a machine without a GPU and on one with a GPU, and the warnings (about no loop fusion etc.) disappeared. Could you maybe spend two sentences on why the warning is there in the first place, and when the `c_code` is actually useful?

Thanks and have a nice weekend,
Arthus

@ricardoV94
Member

If the warning disappeared, it probably means the GPU machine was not finding the C linker and was running in Python mode, which should be slower anyway.

Also, what sampler were you using? If you are using a JAX- or Numba-based sampler, the C backend does not matter. The warning is still emitted, though (see #140).

@arthus701
Contributor

arthus701 commented May 31, 2024

Ah, okay. I'm using JAX and was seeing a big increase in compute time when including the beta CDF, but that is likely related to the more complicated model and not to the C code, as I was inferring from the warning. Thanks a lot for your explanation.

Does the warning disappearing still indicate a problem?

Anyway, if you think my work is of use I can update my branch to the most recent commits and open a separate PR on monday, just let me know.

Cheers.

@ricardoV94
Member

ricardoV94 commented May 31, 2024

> Does the warning disappearing still indicate a problem?

Likely not, although `sample_prior_predictive` / `sample_posterior_predictive` may be a bit slower. I'm curious in which part of the workflow you were seeing the warning. Could you set `warnings.simplefilter("error")` and provide a traceback, so we can see why this rewrite was even being triggered in a JAX workflow?
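For anyone following along, the suggestion above relies on the standard-library `warnings` machinery: promoting warnings to errors turns the warning site into a full traceback. A minimal self-contained illustration (the warning text here just mimics the PyTensor message; it is not emitted by PyTensor itself):

```python
import traceback
import warnings

# Promote every warning to an exception, so a traceback points at
# whatever code triggered it.
warnings.simplefilter("error")

try:
    # Stand-in for the PyTensor optimization warning; in a real session
    # this would be raised from inside the rewrite machinery.
    warnings.warn(
        "Optimization Warning: The Op betainc does not provide a C "
        "implementation.",
        UserWarning,
    )
except UserWarning:
    traceback.print_exc()  # shows exactly who issued the warning
```

One caveat: libraries that manage their own filters (e.g. via `warnings.catch_warnings()`) can override a filter set this way, in which case the warning is printed but never raised and the program keeps running.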

> Anyway, if you think my work is of use I can update my branch to the most recent commits and open a separate PR on monday, just let me know.

Since the work is already done, there is no reason not to get it merged. It would be much appreciated!

I opened #794 to avoid wasting people's time in the future. This warning is not useful in the current state of things.

@arthus701
Contributor

Sure thing, will address both points on Monday.

@arthus701
Contributor

Is this the output you're after?

ERROR (pytensor.graph.rewriting.basic): SequentialGraphRewriter apply <pytensor.tensor.rewriting.elemwise.FusionOptimizer object at 0x7f7384eb2e90>
ERROR (pytensor.graph.rewriting.basic): Traceback:
ERROR (pytensor.graph.rewriting.basic): Traceback (most recent call last):
  File "/home/arthus/Documents/pytensor/pytensor/graph/rewriting/basic.py", line 291, in apply
    sub_prof = rewriter.apply(fgraph)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/arthus/Documents/pytensor/pytensor/tensor/rewriting/elemwise.py", line 1028, in apply
    for inputs, outputs in find_next_fuseable_subgraph(fgraph):
  File "/home/arthus/Documents/pytensor/pytensor/tensor/rewriting/elemwise.py", line 999, in find_next_fuseable_subgraph
    fuseable_clients, unfuseable_clients = initialize_fuseable_mappings(fg=fg)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/arthus/Documents/pytensor/pytensor/tensor/rewriting/elemwise.py", line 738, in initialize_fuseable_mappings
    and elemwise_scalar_op_has_c_code(client)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/arthus/Documents/pytensor/pytensor/tensor/rewriting/elemwise.py", line 702, in elemwise_scalar_op_has_c_code
    warn(
UserWarning: Optimization Warning: The Op betainc does not provide a C implementation. As well as being potentially slow, this also disables loop fusion.

@arthus701 arthus701 mentioned this pull request Jun 3, 2024
@ricardoV94
Member

> Is this the output you're after?

That traceback is truncated; can you post the whole thing?

@arthus701
Contributor

arthus701 commented Jun 3, 2024

I'm sorry, but that's all I get from using `warnings.simplefilter("error")`. Any idea what I should modify to not get a truncated output on my end? I toyed around a bit with the `traceback` module but had no success yet...

Edit: I realized that setting the simplefilter did extend the message, but the warning is not being caught, i.e. the program continues to run. Is that a known issue with PyMC/pytensor?

@ricardoV94
Member

Strange, the traceback should show something being called from PyMC at least. No worries if it doesn't.

@ricardoV94 ricardoV94 closed this Jul 5, 2024