
Luhn exercise: are you sure about the checksums in the tests? #284


Closed
oalbe opened this issue Jan 24, 2016 · 14 comments

@oalbe (Contributor) commented Jan 24, 2016

Hey there,

I was solving the luhn.py exercise and noticed something odd.
When I did this exercise for the JavaScript track, some of the checksums were different from what they are in the Python track. Isn't the Luhn method supposed to always generate the same results, regardless of the language?

Here are some examples:
luhn_test.py

def test_checksum1(self):
        self.assertEqual(2, Luhn(4913).checksum())

def test_checksum2(self):
        self.assertEqual(1, Luhn(201773).checksum())

luhn.spec.js

it('checksum',function() {
    var luhn = new Luhn(4913);
    expect(luhn.checksum).toEqual(22);
  });

it('checksum again',function() {
    var luhn = new Luhn(201773);
    expect(luhn.checksum).toEqual(21);
  });

As you can see, the numbers to check are the same, but the results differ between the two languages.
Is this an error? (In which case I can correct it now and submit a pull request.) Or is it the way it is supposed to be? (Unlikely.)

Thank you.

@kytrinyx (Member)

I'm not sure about the checksums in the tests; they should be the same. It would be great to figure out what the correct inputs/outputs are and then extract all the data into a luhn.json file per https://github.com/exercism/todo/issues/110

Then we could normalize all of the tracks that have this exercise.

If you have time to figure out what the right numbers are, would you mind adding them to exercism/todo#110? If you have time to do a pull request, that would also be excellent!

@oalbe (Contributor, Author) commented Jan 25, 2016

Yep, I can do that, but it will take me a while since I don't have much free time at the moment. I will comment there as soon as I have things sorted out, and I apologize in advance if this ends up dragging on for a bit.

@kytrinyx (Member)

No worries at all -- that's the nature of this beast. Now that it's documented, someone else might have time to take a look as well.

@behrtam (Contributor) commented Jan 25, 2016

Not all languages that implement this exercise use those two numbers in their tests, but ruby and java, for example, match javascript. It looks like the Python checksums are wrong.
http://www.ee.unb.ca/cgi-bin/tervo/luhn.pl?N=4913
http://www.ee.unb.ca/cgi-bin/tervo/luhn.pl?N=201773
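For reference, a minimal sketch of the raw Luhn sum (double every second digit from the right, subtracting 9 from any doubled digit above 9) reproduces the JavaScript/Ruby/Java expectations of 22 and 21. The Python tests' 2 and 1 are those sums taken mod 10, which is where the discrepancy comes from:

```python
def luhn_sum(number):
    """Raw Luhn sum: double every second digit, counting from the right,
    and subtract 9 when a doubled digit exceeds 9."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(str(number))):
        if i % 2 == 1:  # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total

print(luhn_sum(4913))          # 22 (JavaScript track's expectation)
print(luhn_sum(201773))        # 21
print(luhn_sum(4913) % 10)     # 2  (Python track's expectation)
print(luhn_sum(201773) % 10)   # 1
```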

@behrtam (Contributor) commented Feb 8, 2016

@oalbe Do you want to make the small adjustment to the luhn implementation?

@oalbe (Contributor, Author) commented Feb 8, 2016

@behrtam I thought I did. I mean, I created the luhn.json with the corrected values for the exercise. I thought it was automatically generated from there. Am I wrong?

@behrtam (Contributor) commented Feb 8, 2016

Sadly the test suite is not generated automatically (yet; see #271). The Python example implementation also needs to be changed slightly to match the new test cases.

I could do those small changes but only if you don't want to.

@oalbe (Contributor, Author) commented Feb 8, 2016

Oh, I see. I can totally do the changes if you prefer, so you can focus on something else. But not right this moment; probably in a few hours or tomorrow.

@behrtam (Contributor) commented Feb 8, 2016

Nice, I don't think it's urgent so take all the time you need.

@oalbe (Contributor, Author) commented Feb 9, 2016

@behrtam I was working on resolving this issue, but I noticed a problem: my fork of the repo didn't sync with the original repo (that is, this one). I assume this is because of the pending docs PR I made directly on the master branch. The fact is, I wouldn't want to mess something up by committing to an out-of-date fork and then opening a pull request from there.

Any idea how to solve this issue?

@kytrinyx (Member)

If you already have your luhn stuff on master, then my suggestion is to create a backup:

git checkout -b luhn-backup

Then reset master to whatever is on upstream, check out a new branch (e.g. luhn-checksums), and then cherry-pick the luhn commit(s) onto the new, clean branch.

To make sure your master is exactly what is upstream you can run:

git checkout master && git fetch upstream master && git reset --hard upstream/master

(This assumes that you have this repo, exercism/xpython, as a remote named upstream, and that your fork is origin.)
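Taken together, and assuming the remote names above (upstream for exercism/xpython, origin for the fork), the suggested recovery would look roughly like this workflow sketch, where <sha> stands for whatever luhn commit hash needs to be carried over:

```shell
# Keep any local luhn work safe on a backup branch
git checkout -b luhn-backup

# Reset local master to match upstream exactly
git checkout master
git fetch upstream master
git reset --hard upstream/master

# Start a clean feature branch and cherry-pick the luhn commit(s) onto it
git checkout -b luhn-checksums
git cherry-pick <sha>
```

Nothing here touches origin, so the open docs PR is unaffected until something is pushed.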

@oalbe (Contributor, Author) commented Feb 14, 2016

@kytrinyx Actually, I don't have the fixes for this issue pushed; I was working locally when I realized the problem, and I can revert anytime. What I'm not sure about is: wouldn't a reset on my fork's master automatically close the open PR from master (the one with the two doc files)? I mean, if I reset my fork to the state of exercism/xpython, then the doc files will get deleted, and that should theoretically close the PR, right?

That's basically the only problem; otherwise I would have already reset.

@kytrinyx (Member)

It would only close it if you pushed your reset up to the fork. If you wait until it's merged then you'll be fine.

@oalbe (Contributor, Author) commented Feb 15, 2016

I think I'll wait. The reset should be automatically pushed to my fork when I push the fix for this issue, and I want to avoid making new messes. Thank you @kytrinyx.

3 participants