Emphasis with CJK punctuation #650
This and the above issues are caused by the change in #618. It first appears in the v0.30 spec: https://spec.commonmark.org/0.30/changes
The definition of left- and right-flanking emphasis for `*` and `**` should use ASCII punctuation characters instead of Unicode ones. Using ASCII punctuation does not cause such a problem, so remark, on which MDX v2+ depends, is affected.
Again, there is no change in 618. That PR is just about words, terminology. MDX 1 did not follow CM correctly and had other bugs. Can you please read what I say, please stop spamming, and actually contribute?
The extension by MDX is not the culprit. https://codesandbox.io/s/remark-playground-wmfor?file=/package.json It is not reproduced in the latest Prettier (which uses …).
This means the change deserves credit for making it clear that this part of the specification is terrible and should be revised. Old …
https://spec.commonmark.org/0.29/
You are right. I'm sorry. I will look for another version.
I finally found that the current broken definition sentences were introduced in 0.14. https://spec.commonmark.org/0.14/changes https://spec.commonmark.org/0.13/ I will investigate why these were introduced.
https://github.com/commonmark/commonmark-spec/blob/0.14/changelog.spec.txt
http://talk.commonmark.org/t/903/6
Note: I replaced the link with a cached copy from the Wayback Machine. I conclude that this problem was caused by a lack of consideration for Chinese and Japanese by vfmd.
I would like to ask them why they included non-ASCII punctuation characters and why ASCII punctuation characters alone were not considered sufficient.
I will run `git blame` on https://github.com/vfmd/vfmd-spec/blob/gh-pages/specification.md later. The test cases in vfmd considered only ASCII punctuation.
I found the commit containing the initial definition in the spec of vfmd:
@tats-u dude, here and in your comments on #618 you come off as arrogant and very disrespectful. You make absolutist claims and then frequently correct yourself because it turns out you didn't do your homework. You need to have the humility to realize that your perception that "something broke or is broken" might have to do with you not understanding one or more of the following (I don't have the time to figure out which ones; the responsibility is on you):
A more reasoned, respectful and helpful approach would be to have a discussion with other people who are affected by what you claim is broken, including the makers and other users of the downstream tool that you claim is now broken. Diagnose the problem with them, assuming they agree with you that there is a problem, before making a claim that the source of the problem is upstream in CommonMark. If it turns out that you are alone in this, that should tell you something.
@tats-u This issue is still open, so indeed it is looking for a solution. It is also something I have heard from others. However, it is not easy to solve. There are also legitimate cases where you do want to use an asterisk or underscore but don’t want it to result in emphasis/strong. Also in East Asian languages. One idea I have, that could potentially help emphasis/strong, is the Unicode line breaking algorithm: https://unicode.org/reports/tr14/.
@vassudanagunta I got too angry at that time. I do think now that it went over the line.
Let me say the root causes are never in the individual frameworks. This problem can be reproduced in the most major JS Markdown frameworks, remark (unified) and markdown-it. Remark-related issues that I have raised are closed immediately on the grounds that the behavior follows the spec.
I never have. This is why I have now looked into the background and the impact of my proposed changes.
It looks like a lot of work to study the impact of breaking changes and decide whether or not to apply them.
Due to this problem, it became necessary for me (us) to tell all Japanese (and some Chinese) Markdown writers to refrain from surrounding whole sentences with `**`: <!-- How would you feel if Markdown did not recognize ** here as <strong> when you removed 4 or 5 spaces? -->
**Don't surround the whole sentence with the double-asterisk without adding extra spaces!** The Foobar language, which is spoken by most CommonMark maintainers, uses as many as 6 spaces to split sentences.
This is what I have found by digging through the Git history, change logs, and test cases.
It is not surprising that you and the maintainers give this problem low priority, since it does not affect any European language, all of which put spaces next to punctuation and parentheses.
I clearly doubt this. @wooorm I apologize again for my anger and for being too militant in my remarks. My humble suggestions, and comments on them:
I know. It is the background of this problem.
I have looked for such cases and their frequency. Escaping them does not modify the rendered content itself, but I am disgusted at having to modify the content by adding extra spaces or depending on an inline raw JSX tag.
I will look into it later. (I do not expect it either)
Checking the general Unicode categories Pc, Pd, Pe, Pf, Pi, Po and Ps: U+3001 Ideographic Comma and U+3002 Ideographic Full Stop are of course included in what CommonMark considers punctuation marks, which are all treated alike. For its definitions of flanking, CM could start to handle Open/Start (Ps) and Close/End (Pe) punctuation differently. Possibly affected examples are, for instance: 363, 367+368, 371+372, 376 and 392–394.
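For what it's worth, here is a quick check (a sketch assuming Node.js with Unicode property escapes in regular expressions) that these characters do fall into the categories listed:

```ts
// Check which characters match the seven General_Category values that
// CommonMark lumps together as "Unicode punctuation".
const punctuation = /^[\p{Pc}\p{Pd}\p{Pe}\p{Pf}\p{Pi}\p{Po}\p{Ps}]$/u;

for (const ch of ["、", "。", ",", ".", "「", "」"]) {
  // U+3001 and U+3002 are General_Category=Po; 「 and 」 are Ps and Pe.
  console.log(`U+${ch.codePointAt(0)!.toString(16).toUpperCase()} ${ch}:`,
              punctuation.test(ch)); // true for all of these
}
```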
I checked the test cases raised. 367 is the most affected of them. However, there are some that were not raised but are more important. I am not convinced by test case 378.
Does it not mean that …? FYI, as of https://hypestat.com/info/github.com, one in six visitors to GitHub lives in China or Japan. This percentage cannot be ignored or underestimated.
The “Permitted content: Phrasing content” bit allows it for both.
I don’t think anybody is underestimating that. Practically, this is also open source, which implies that somebody has to do the work for free here, probably because they think it’s fun or important to do. And then folks working on markdown parsers need to do it too. To illustrate, GitHub hasn’t really done anything in the last 3 years (just security vulnerabilities / the fancy new footnotes feature).
Getting emphasis right in markdown (especially nested emphasis) is very difficult. Changing the existing rules without messing up cases that currently work is highly nontrivial. For what it's worth, my rationalized syntax djot has simpler rules for emphasis, gives you what you want in the above Japanese example, and allows you to use braces to clarify nesting in cases where it's unclear, e.g.
This is technically possible but not practical or necessary. It is much easier and faster to type 「 and 」 directly from the keyboard, and you cannot copy these brackets in …
Almost all descriptions of Markdown for newbies, including the following, say that …
I do not know of SaaSes in Japan that customize the style of those elements. The current behavior of CommonMark forces newbies in China or Japan to try to decipher its spec; it is written for developers of Markdown parsers, not for users other than experts. CommonMark has now grown to the point where it can steer the largest Markdown implementations (remark, markdown-it, goldmark (used by Hugo), commonmarker (possibly used by GitHub), and so on) from behind the scenes. We may well lobby to revise its specification. (Unenforceable of course, though!) It would not be difficult to create a new specification of Markdown, but it is difficult to give sufficient power to it. These are the reasons why I had tried to abolish the left- and right-flanking rule, but I have recently found a convincing plan. Under my plan, we have only to change:
We do not have to change anything else. I hope most Chinese and Japanese users can be convinced by it. Also, you can continue to nest …
I am a little relieved to hear that. I apologize for the misunderstanding.
It would affect too many documents if the left- & right-flanking rule were abolished. However, the new plan will not affect most existing documents, except for ones that abuse details of the spec. Do you mean that they are also included in "all existing" ones? I suggest the new terms "punctuation run preceded by space" & "punctuation run followed by space".
(2a) and (2b) are going to be changed like the following:
This change treats punctuation characters that are not adjacent to a space as normal letters. To see how it behaves, consider the following examples:

```
**これは太字になりません。**ご注意ください。
カッコに注意**(太字にならない)**文が続く場合に要警戒。
**[リンク](https://example.com)**も注意。(画像も同様)
先頭の**`コード`も注意。**
**末尾の`コード`**も注意。
```

Also, we can parse even the following English as intended:

You should write “John**'s**” instead.

We do not concatenate very many punctuation characters in a row, so we do not have to search more than ten-odd (e.g. 16) punctuation characters for a space before or after the target delimiter run. To check whether a delimiter run is "the last characters in a punctuation run preceded by space" (without using a cache):

```mermaid
flowchart TD
    Next{"Is the<br>next character<br>a Unicode punctuation<br>character?"}
    Next --> |YES| F["<code>return false</code>"]
    Next --> |NO| Init["<code>current =</code><br>(previous character)<br><code>n =</code><br>(length of delimiter run)"]
    Init --> Exceed{"<code>n >= 16</code>?"}
    Exceed --> |YES| F
    Exceed --> |NO| Previous{"What type is <code>current</code>?"}
    Previous --> |Not punctuation or space| F
    Previous --> |Space| T["<code>return true</code>"]
    Previous --> |Unicode punctuation| Iter["<code>n++<br>current =</code><br>(previous character)"]
    Iter --> Exceed
```
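To make the flowchart concrete, here is a minimal TypeScript transcription. It is a naive sketch, not code from any real parser: the function name, the code-point-array input, and treating start-of-input like a space are my assumptions; the 16-character cap comes from the diagram.

```ts
const PUNCT = /^[\p{Pc}\p{Pd}\p{Pe}\p{Pf}\p{Pi}\p{Po}\p{Ps}]$/u;
const SPACE = /^\s$/u;
const MAX = 16;

/**
 * Is the delimiter run chars[runStart..runEnd) "the last characters in a
 * punctuation run preceded by space"? Direct transcription of the flowchart.
 */
function isLastOfPunctRunPrecededBySpace(
  chars: string[], runStart: number, runEnd: number
): boolean {
  // "Is the next character a Unicode punctuation character?" → return false
  const next = chars[runEnd];
  if (next !== undefined && PUNCT.test(next)) return false;

  let n = runEnd - runStart; // length of the delimiter run
  let i = runStart - 1;      // current = previous character
  for (;;) {
    if (n >= MAX) return false; // bounded lookbehind: give up after 16
    const current = chars[i];
    // Assumption: start of input counts as space.
    if (current === undefined || SPACE.test(current)) return true;
    if (!PUNCT.test(current)) return false; // ordinary letter: not a punctuation run
    n++; i--;                 // step over one more punctuation character
  }
}

// The closing ** in the first example above: preceded by 。 then ん, so the
// punctuation run is preceded by a letter, not a space → false.
const s = [..."これは太字になりません。**ご注意ください。"];
console.log(isLastOfPunctRunPrecededBySpace(s, 12, 14)); // false
```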
In the current spec, to non-advanced users, especially in China or Japan, …
0.31 changes the wording slightly, but as far as I can tell this does not change flanking behavior at all.
The change made the situation even worse.
The only small improvements are that it has become easier to explain the condition to beginners (we can now use the single word “symbols”) and that it is more consistent with ASCII punctuation characters.
This particular change was not intended to address this issue; it was just intended to make things more consistent. @tats-u I am sorry, I have not yet had time to give your proposal proper consideration.
I guessed so, but as a result it did cause a breaking change and broke some documents (far fewer than those affected by 0.14, though), which is the kind of regression you have most feared and cared about. In the first place, we cannot easily get at convincing and practical examples that show how legitimate controversial parts of specifications and changes are; we can easily find only ones that are designed purely for testing and carry no real-world meaning (e.g. …). What is needed is something like:

Price: **€**10 per month (note: you cannot pay in US$!)
FYI, you do not have to evaluate how to optimize the algorithm in the above flowchart; it is naive as written and can be optimized later. All I want you to do first is evaluate how acceptable the breaking changes brought by my revision are. It might be better for me to make a PoC to make that easier.
To be honest, I didn't anticipate these breaking changes, and I would have thought twice about the change if I had. Having a parser to play with that implements your idea would make it easier to see what its consequences would be. (Ideally, a minimally altered cmark or commonmark.js.) It's also important to have a plan that can be implemented without significantly degrading the parser's performance. But my guess is that if it's just a check that has to be run once for each delimiter + punctuation run, it should be okay.
A new package with Korean support has just arrived: https://www.npmjs.com/package/markdown-it-cjk-friendly The previous package is now deprecated.
The remark plugin & micromark extension have just been released:
Specifications revision RFC: https://github.com/tats-u/markdown-cjk-friendly/pull/3/files I'm going to:
If no one leaves a comment, I will merge the branch, release new package versions, and finalize the specifications as the variant without the 2-character lookahead.
I found that single and double quotes (‘’ & “”; U+2018–2019 & U+201C–201D) can be followed by the variation selector U+FE01 to specify that they are full-width CJK forms. https://www.unicode.org/Public/16.0.0/ucd/StandardizedVariants.txt
These entries were just added in the latest Unicode 16; there are no such entries in Unicode 15: https://www.unicode.org/Public/15.0.0/ucd/StandardizedVariants.txt This means it is all the more necessary to look ahead two characters to determine whether the next character is CJK.
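A sketch of that two-character lookahead (the set and the function shape are illustrative, not from any existing parser; per the proposal, VS2 = U+FE01 marks the CJK full-width form):

```ts
// Quotes whose East Asian width is ambiguous without a variation selector.
const AMBIGUOUS_QUOTES = new Set(["\u2018", "\u2019", "\u201C", "\u201D"]);

// Does text[i] begin a quote explicitly marked as a CJK full-width form?
// Requires looking at text[i + 1] as well, hence the two-character check.
function isCjkFullwidthQuote(text: string, i: number): boolean {
  return AMBIGUOUS_QUOTES.has(text[i]) && text[i + 1] === "\uFE01"; // VS2
}

console.log(isCjkFullwidthQuote("“\uFE01例”", 0)); // true
console.log(isCjkFullwidthQuote("“example”", 0));  // false
```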
Sorry, I haven't had time to test your markdown-cjk-friendly. Maybe this weekend I can try to introduce it into the to-the-stars translation project, a novel with over 800,000 words that has encountered many related issues.
You can try the current version. Or should I release pre-release versions reflecting the new specs candidate?
The new candidate addresses only (preceding) punctuation characters with variation selectors (#798). I think most people don't input them, or probably never have. Quotation marks with variation selectors haven't been considered there yet.
If possible, I would prefer to try the pre-release version of the new specification candidate; after all, is there any need to test twice? I quickly checked:

Source: “您已到达_la grotte éclatante_(荧光洞窟)站,” 提示音后,有轨电车缓缓停了下来。

Rendered: “您已到达_la grotte éclatante_(荧光洞窟)站,” 提示音后,有轨电车缓缓停了下来。

Not sure if it is related to #798.
I will consider releasing them.

In the first place, `_` is effective only where both ends are adjacent to either punctuation or space:

```
__注意__:__句子__
__句子__。__句子__。__句子。__
```
I am currently indeed using extra spaces on both ends, and then cleaning them up when rendering to HTML. So it is still not helpful for this use case, right? |
You can use `*` instead:

“您已到达*la grotte éclatante*(荧光洞窟)站,” 提示音后,有轨电车缓缓停了下来。

达 is incompatible with `_`; it is not punctuation or space.
https://www.unicode.org/L2/L2023/23212r-quotes-svs-proposal.pdf VS1: U+FE00 (non-CJK); VS2: U+FE01 (CJK)
I released pre-release versions under the `next` tag:

```
npm i -D markdown-it-cjk-friendly@next
npm i -D remark-cjk-friendly@next
```
Swift-flavored cmark is trying to implement my suggestion: swiftlang/swift-cmark#78 |
FYI, Typst has a similar need and simply judges by script:
@ArcticLampyrid It's not sufficient, because that method lacks some characters shared by multiple CJK scripts. Fullwidth alphanumerics are not covered, either.
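A quick illustration (a sketch assuming Node.js regex Unicode property escapes) of why judging by Script=Han alone is not enough:

```ts
const han = /^\p{Script=Han}$/u;

console.log(han.test("漢")); // true
console.log(han.test("A")); // false — U+FF21 FULLWIDTH LATIN CAPITAL LETTER A is Script=Latin
console.log(han.test("ー")); // false — U+30FC, shared by Hiragana/Katakana, is Script=Common
console.log(han.test("。")); // false — U+3002 IDEOGRAPHIC FULL STOP is Script=Common
```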
Hello! @tats-u already linked it, but i’ve written up an implementation of the current version of his spec proposal for our library swift-cmark: swiftlang/swift-cmark#78 swift-cmark is based on GitHub-Flavored Markdown, which is itself based on the C cmark implementation, so this could serve as a sort of reference implementation if the current iteration doesn’t change too much. (I did write a helper script in Swift, though, since i’m personally more comfortable with that than with Perl or Python! 😅 Not sure if that would want to change for an eventual implementation.) I’m gonna follow this thread in case something important changes that i need to update. Thanks for all the work so far!
rc.2 is released for my packages. tats-u/markdown-cjk-friendly#3 See https://www.unicode.org/L2/L2023/23212r-quotes-svs-proposal.pdf for details on the character sequences that have just been supported. Code changes: tats-u/markdown-cjk-friendly@e7bad67 Specs diff from …
New stable versions have been released; no behavior changes since rc.2. |
Hi, I encountered some strange behavior when using CJK full-width punctuation and trying to add emphasis.
Original issue here
Example punctuation that causes this issue:
。!?、
To my mind, all of these should work as emphasis, but some do and some don't:
I'm not sure if this is the spec as intended, but in Japanese, as a general rule there are no spaces in sentences, which leads to the following kind of problem when parsing emphasis.
In English, this is emphasized as expected:
This is **what I wanted to do.** So I am going to do it.
But the same sentence emphasized in the same way in Japanese fails:
これは**私のやりたかったこと。**だからするの。
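A minimal reproduction sketch using commonmark.js (assuming the `commonmark` npm package; the comments show the output I would expect under the 0.30-era rules):

```ts
import { Parser, HtmlRenderer } from "commonmark";

const reader = new Parser();
const writer = new HtmlRenderer();

// English: the closing ** is followed by a space, so it is right-flanking
// and can close the emphasis.
console.log(writer.render(reader.parse(
  "This is **what I wanted to do.** So I am going to do it."
)));
// <p>This is <strong>what I wanted to do.</strong> So I am going to do it.</p>

// Japanese: the closing ** is preceded by 。 (Unicode punctuation) and
// followed by だ (neither space nor punctuation), so it is left-flanking
// but not right-flanking — it cannot close, and the asterisks stay literal.
console.log(writer.render(reader.parse(
  "これは**私のやりたかったこと。**だからするの。"
)));
// <p>これは**私のやりたかったこと。**だからするの。</p>
```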