New explanations based on positive signals in watched content #1

Open

sjay8 wants to merge 3 commits into main

Conversation

@sjay8 commented Mar 10, 2025

Enhanced Explanation Generation in produce_explanation.py:

  • Previously, the top history function relied solely on whether a user watched a piece of content, but watching something doesn’t necessarily mean they enjoyed it.
  • A new joint function adds more specific explanations based on positive watch signals:
    • Explicit Signal – When a user gives a thumbs up to a title.
    • Implicit Signal – When a user watches more than 80% of the content.
    • If present, the explicit explanation is returned first, then the implicit one, and finally the default explanation (see the sketch after this list).
  • Kept the original watched function in the code for flexibility
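
A minimal sketch of how the joint function could be structured, assuming hypothetical field names (`thumbs_up`, `watch_fraction`, `title`) and illustrative explanation strings; the actual code in produce_explanation.py may differ:

```python
# Sketch only: field names and message wording are assumptions, not the
# actual produce_explanation.py implementation.

IMPLICIT_THRESHOLD = 0.80  # "watched more than 80%" counts as implicit engagement


def explain_from_history(history_item):
    """Return an explanation based on positive watch signals, in priority order."""
    title = history_item["title"]

    # Explicit signal: the user gave the title a thumbs up.
    if history_item.get("thumbs_up"):
        return f"Because you liked {title}"

    # Implicit signal: the user watched more than 80% of the content.
    if history_item.get("watch_fraction", 0.0) > IMPLICIT_THRESHOLD:
        return f"Because you watched most of {title}"

    # Default: fall back to the original watched-based explanation.
    return f"Because you watched {title}"
```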

Synthetic Data Updates in make_data.py:

  • Updated data generation to include artificial liked and watched data.
  • Assumed that only 30% of watched content receives an explicit like.
  • Used random.choice to select four watch-fraction values, ensuring that two of them are greater than 0.80 to simulate implicit engagement (see the sketch after this list).
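
A minimal sketch of the synthetic signal generation, assuming hypothetical names and pool values; it keeps the stated 30% like rate and the guarantee that two of the four watch fractions exceed 0.80, but the actual make_data.py code may differ:

```python
import random

# Sketch only: constants and candidate pools are illustrative assumptions.
LIKE_RATE = 0.30                           # ~30% of watched titles get an explicit like
HIGH_POOL = [0.85, 0.90, 0.95, 1.0]        # fractions above the 0.80 threshold
LOW_POOL = [0.10, 0.25, 0.40, 0.60, 0.75]  # fractions at or below it


def make_watch_signals():
    """Generate a thumbs-up flag and four watch fractions, two of them > 0.80."""
    thumbs_up = random.random() < LIKE_RATE

    # random.choice draws two fractions guaranteed to exceed 0.80 (implicit
    # engagement) and two that do not, then the order is shuffled.
    fractions = [random.choice(HIGH_POOL) for _ in range(2)] + \
                [random.choice(LOW_POOL) for _ in range(2)]
    random.shuffle(fractions)
    return thumbs_up, fractions
```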
