Conversation

@jdhenaos
Contributor

@jdhenaos jdhenaos commented Dec 5, 2024

I have added the MedShapeNet dataset with eight classes.

The following folders were excluded from the MedShapeNet dataset due to a lack of samples:

ASOCA (n=43)
AVT (n=45)
AutoImplantCraniotomy (n=14)
FaceVR (n=14)

This dataset takes a size parameter to download the same number of samples per shape, ensuring class balance.

MedShapeNet is not split into train and test sets by default. Therefore, the train data is taken from the first n samples of each class, and the test data from the last n samples.

The dgcnn_classification example was modified to support both ModelNet and MedShapeNet.
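The class-balanced first-n / last-n split described above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code; the names `samples_by_class` and `balanced_split` are made up for the example.

```python
def balanced_split(samples_by_class, size):
    """Take the first `size` samples of each class for training and the
    last `size` samples for testing, keeping classes balanced."""
    train, test = [], []
    for label, samples in samples_by_class.items():
        if len(samples) < 2 * size:
            # Mirrors why small folders (e.g. ASOCA, n=43) were excluded:
            # there are not enough samples for both splits.
            raise ValueError(f"class {label!r} has too few samples")
        train += [(s, label) for s in samples[:size]]   # first n per class
        test += [(s, label) for s in samples[-size:]]   # last n per class
    return train, test
```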

@jdhenaos jdhenaos requested a review from wsad1 as a code owner December 5, 2024 20:49
@puririshi98
Contributor

“MedShapeNet is not divided by train and test by default. Therefore, train data is downloaded from the first n required number of samples, and the test data is downloaded from the last n required number of samples.”
Does this mean it is not doing a random split?

I would recommend splitting the data randomly if no train/val/test split is predefined.

@puririshi98
Contributor

root@bldgs-dune-b00-mab-3:/workspace/pytorch_geometric# python3 examples/dgcnn_classification.py -h
Traceback (most recent call last):
  File "/workspace/pytorch_geometric/examples/dgcnn_classification.py", line 8, in <module>
    import torch_geometric.transforms as T
  File "/usr/local/lib/python3.12/dist-packages/torch_geometric/__init__.py", line 21, in <module>
    import torch_geometric.datasets
  File "/usr/local/lib/python3.12/dist-packages/torch_geometric/datasets/__init__.py", line 83, in <module>
    from .medshapenet import MedShapeNet
  File "/usr/local/lib/python3.12/dist-packages/torch_geometric/datasets/medshapenet.py", line 8, in <module>
    from MedShapeNet import MedShapeNet as msn
ModuleNotFoundError: No module named 'MedShapeNet'
root@bldgs-dune-b00-mab-3:/workspace/pytorch_geometric# pip install medshapenet
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/lightning_utilities-0.11.8-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/faiss-1.7.4-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/pyg_lib-0.4.0-py3.12-linux-aarch64.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/looseversion-1.3.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/igraph-0.11.8-py3.12-linux-aarch64.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/lightning_thunder-0.2.0.dev0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/opt_einsum-3.4.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/texttable-1.7.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
DEPRECATION: Loading egg at /usr/local/lib/python3.12/dist-packages/nvfuser-0.2.13a0+0d33366-py3.12-linux-aarch64.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting medshapenet
  Downloading MedShapeNet-0.1.25-py3-none-any.whl.metadata (5.8 kB)
Collecting minio>=7.2.8 (from medshapenet)
  Downloading minio-7.2.12-py3-none-any.whl.metadata (6.5 kB)
Requirement already satisfied: tqdm>=4.66.5 in /usr/local/lib/python3.12/dist-packages (from medshapenet) (4.67.0)
Collecting numpy<1.26.0,>=1.18.5 (from medshapenet)
  Downloading numpy-1.25.2.tar.gz (10.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.8/10.8 MB 90.6 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [33 lines of output]
      Traceback (most recent call last):
        File "/usr/local/lib/python3.12/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/usr/local/lib/python3.12/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/usr/local/lib/python3.12/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 112, in get_requires_for_build_wheel
          backend = _build_backend()
                    ^^^^^^^^^^^^^^^^
        File "/usr/local/lib/python3.12/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
          obj = import_module(mod_path)
                ^^^^^^^^^^^^^^^^^^^^^^^
        File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
          return _bootstrap._gcd_import(name[level:], package, level)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
        File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
        File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
        File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
        File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
        File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
        File "<frozen importlib._bootstrap_external>", line 995, in exec_module
        File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
        File "/tmp/pip-build-env-51s0gzdh/overlay/local/lib/python3.12/dist-packages/setuptools/__init__.py", line 16, in <module>
          import setuptools.version
        File "/tmp/pip-build-env-51s0gzdh/overlay/local/lib/python3.12/dist-packages/setuptools/version.py", line 1, in <module>
          import pkg_resources
        File "/tmp/pip-build-env-51s0gzdh/overlay/local/lib/python3.12/dist-packages/pkg_resources/__init__.py", line 2172, in <module>
          register_finder(pkgutil.ImpImporter, find_on_path)
                          ^^^^^^^^^^^^^^^^^^^
      AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

note that CI is also failing with this issue.
to fix CI see how the imports are done for webqsp: https://github.com/pyg-team/pytorch_geometric/blob/master/torch_geometric/datasets/web_qsp_dataset.py#L156-L157

However, I am having trouble installing medshapenet through pip to verify that your code actually works. Is that how you install it? Could you provide your setup commands? Based on this, I will also advise whether those instructions should be included somewhere in your code (I will decide based on the complexity of the setup instructions).
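The lazy-import pattern referenced above (as used in web_qsp_dataset.py) can be sketched like this: the optional dependency is imported inside the method that needs it, so `import torch_geometric` keeps working even when the package is missing. The class name here is an illustrative stand-in, not the PR's actual code.

```python
class MedShapeNetSketch:
    """Illustrative stand-in for the dataset class; only the lazy-import
    pattern is the point here."""

    def download(self) -> None:
        # Import the optional dependency only when it is actually needed,
        # so that importing the parent package never fails at import time.
        try:
            from MedShapeNet import MedShapeNet as msn  # noqa: F401
        except ImportError as e:
            raise ImportError(
                f"'{self.__class__.__name__}' requires the 'MedShapeNet' "
                f"package (`pip install MedShapeNet`)") from e
```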

@codecov

codecov bot commented Dec 10, 2024

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 85.43%. Comparing base (c211214) to head (95a6fc5).
⚠️ Report is 87 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #9823      +/-   ##
==========================================
- Coverage   86.11%   85.43%   -0.68%     
==========================================
  Files         496      496              
  Lines       33655    34007     +352     
==========================================
+ Hits        28981    29055      +74     
- Misses       4674     4952     +278     


@xnuohz
Contributor

xnuohz commented Dec 22, 2024

Should it be pip install MedShapeNet?
https://github.com/GLARKI/MedShapeNet2.0

@jdhenaos
Contributor Author

Indeed, pip install MedShapeNet works.

The problem is that MedShapeNet pins an older version of numpy. By default, it installs version 1.25.2, which works well with Python 3.10 but raises an error on recent versions such as 3.12.

I'm not sure how to proceed with this conflict. Any hint is welcome.

@jdhenaos
Contributor Author

@xnuohz and @puririshi98

I have talked with the MedShapeNet team. It looks like they have already solved the problem; I was able to run the dataset on Python 3.10.12.

Please confirm and let me know if there are any pending issues to solve.

Contributor

@xnuohz xnuohz left a comment


left some comments:)

@jdhenaos
Contributor Author

@xnuohz

Thanks for your time checking the code and adding the last suggestions.

I have already worked on the suggested changes:

  1. The whole dataset is now loaded into a single variable.
  2. The dataset is split into train and test externally. I used random_split from torch with a fixed random seed to make the dgcnn_classification example reproducible.
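A minimal sketch of that seeded split: the 80/20 ratio, the seed value, and the toy dataset below are illustrative, not taken from the actual example script.

```python
import torch
from torch.utils.data import random_split

dataset = list(range(100))  # stand-in for the loaded MedShapeNet samples
n_train = int(0.8 * len(dataset))
# A fixed-seed generator makes the random split reproducible across runs.
generator = torch.Generator().manual_seed(42)
train_set, test_set = random_split(
    dataset, [n_train, len(dataset) - n_train], generator=generator)
```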

@xnuohz
Contributor

xnuohz commented May 1, 2025

@jdhenaos Thank you. Can you share the training log?

@jdhenaos
Contributor Author

jdhenaos commented May 2, 2025

@xnuohz the train log:

The root is:  ./data/medshapenet
The Dataset is:  medshapenet
Loading dataset

        This message only displays once when importing MedShapeNet for the first time.

        MedShapeNet API is under construction, more functionality will come soon!

        For information use MedShapeNet.msn_help().
        Alternatively, check the GitHub Page: https://github.com/GLARKI/MedShapeNet2.0

        PLEASE CITE US If you used MedShapeNet API for your (research) project:
        
        @article{li2023medshapenet,
        title={MedShapeNet--A Large-Scale Dataset of 3D Medical Shapes for Computer Vision},
        author={Li, Jianning and Pepe, Antonio and Gsaxner, Christina and Luijten, Gijs and Jin, Yuan and Ambigapathy, Narmada, and others},
        journal={arXiv preprint arXiv:2308.16139},
        year={2023}
        }

        PLEASE USE the def dataset_info(self, bucket_name: str) to find the proper citation alongside MedShapeNet when utilizing a dataset for your resarch project.
        
Connection to MinIO server successful.

Download directory already exists at: /home/juan/Documents/projects/msn_downloads
Running model
Epoch 001, Loss: 2.0634, Test: 0.3750
Epoch 002, Loss: 1.8531, Test: 0.4083
Epoch 003, Loss: 1.4007, Test: 0.4333
Epoch 004, Loss: 1.0079, Test: 0.6250
Epoch 005, Loss: 0.7663, Test: 0.8917
Epoch 006, Loss: 0.4527, Test: 0.8917
Epoch 007, Loss: 0.3515, Test: 0.9083
Epoch 008, Loss: 0.3017, Test: 0.9333
Epoch 009, Loss: 0.2130, Test: 0.9667
Epoch 010, Loss: 0.1129, Test: 0.9417
Epoch 011, Loss: 0.1091, Test: 0.9833
Epoch 012, Loss: 0.0803, Test: 0.9833
Epoch 013, Loss: 0.0464, Test: 0.9833
Epoch 014, Loss: 0.0645, Test: 0.9583
Epoch 015, Loss: 0.0741, Test: 0.9750
Epoch 016, Loss: 0.0362, Test: 1.0000
Epoch 017, Loss: 0.0496, Test: 0.9500
Epoch 018, Loss: 0.0720, Test: 0.9667
Epoch 019, Loss: 0.0430, Test: 0.9833
Epoch 020, Loss: 0.0476, Test: 0.9667
Epoch 021, Loss: 0.0186, Test: 0.9667
Epoch 022, Loss: 0.0263, Test: 1.0000
Epoch 023, Loss: 0.0095, Test: 1.0000
Epoch 024, Loss: 0.0105, Test: 0.9917
Epoch 025, Loss: 0.0076, Test: 1.0000
Epoch 026, Loss: 0.0053, Test: 1.0000
Epoch 027, Loss: 0.0041, Test: 0.9917
Epoch 028, Loss: 0.0034, Test: 0.9917
Epoch 029, Loss: 0.0020, Test: 0.9833
Epoch 030, Loss: 0.0025, Test: 1.0000
Epoch 031, Loss: 0.0015, Test: 0.9917
Epoch 032, Loss: 0.0022, Test: 0.9917
Epoch 033, Loss: 0.0015, Test: 1.0000
Epoch 034, Loss: 0.0031, Test: 1.0000
Epoch 035, Loss: 0.0237, Test: 0.9917
Epoch 036, Loss: 0.0044, Test: 0.9750
Epoch 037, Loss: 0.0102, Test: 1.0000
Epoch 038, Loss: 0.0052, Test: 1.0000
Epoch 039, Loss: 0.0020, Test: 1.0000
Epoch 040, Loss: 0.0044, Test: 1.0000
Epoch 041, Loss: 0.0107, Test: 0.9917
Epoch 042, Loss: 0.0407, Test: 0.9917
Epoch 043, Loss: 0.0237, Test: 0.9833
Epoch 044, Loss: 0.0017, Test: 0.9917
Epoch 045, Loss: 0.0046, Test: 0.9833
Epoch 046, Loss: 0.0012, Test: 0.9917
Epoch 047, Loss: 0.0020, Test: 0.9917
Epoch 048, Loss: 0.0005, Test: 0.9917
Epoch 049, Loss: 0.0011, Test: 1.0000
Epoch 050, Loss: 0.0026, Test: 0.9917
Epoch 051, Loss: 0.0016, Test: 1.0000
Epoch 052, Loss: 0.0006, Test: 1.0000
Epoch 053, Loss: 0.0017, Test: 0.9917
Epoch 054, Loss: 0.0026, Test: 1.0000
Epoch 055, Loss: 0.0008, Test: 1.0000
Epoch 056, Loss: 0.0007, Test: 1.0000
Epoch 057, Loss: 0.0012, Test: 0.9917
Epoch 058, Loss: 0.0017, Test: 1.0000
Epoch 059, Loss: 0.0012, Test: 1.0000
Epoch 060, Loss: 0.0008, Test: 0.9917
Epoch 061, Loss: 0.0005, Test: 1.0000
Epoch 062, Loss: 0.0014, Test: 0.9917
Epoch 063, Loss: 0.0007, Test: 0.9917
Epoch 064, Loss: 0.0021, Test: 0.9917
Epoch 065, Loss: 0.0003, Test: 1.0000
Epoch 066, Loss: 0.0003, Test: 1.0000
Epoch 067, Loss: 0.0004, Test: 0.9917
Epoch 068, Loss: 0.0010, Test: 0.9917
Epoch 069, Loss: 0.0003, Test: 1.0000
Epoch 070, Loss: 0.0016, Test: 0.9917
Epoch 071, Loss: 0.0009, Test: 1.0000
Epoch 072, Loss: 0.0010, Test: 1.0000
Epoch 073, Loss: 0.0005, Test: 0.9917
Epoch 074, Loss: 0.0085, Test: 1.0000
Epoch 075, Loss: 0.0008, Test: 1.0000
Epoch 076, Loss: 0.0024, Test: 1.0000
Epoch 077, Loss: 0.0025, Test: 0.9917
Epoch 078, Loss: 0.0010, Test: 0.9917
Epoch 079, Loss: 0.0007, Test: 0.9833
Epoch 080, Loss: 0.0012, Test: 0.9833
Epoch 081, Loss: 0.0007, Test: 0.9833
Epoch 082, Loss: 0.0006, Test: 0.9917
Epoch 083, Loss: 0.0009, Test: 0.9917
Epoch 084, Loss: 0.0006, Test: 0.9917
Epoch 085, Loss: 0.0006, Test: 0.9917
Epoch 086, Loss: 0.0006, Test: 0.9917
Epoch 087, Loss: 0.0007, Test: 0.9833
Epoch 088, Loss: 0.0002, Test: 0.9917
Epoch 089, Loss: 0.0007, Test: 0.9917
Epoch 090, Loss: 0.0002, Test: 1.0000
Epoch 091, Loss: 0.0025, Test: 0.9917
Epoch 092, Loss: 0.0005, Test: 0.9833
Epoch 093, Loss: 0.0011, Test: 0.9917
Epoch 094, Loss: 0.0004, Test: 0.9917
Epoch 095, Loss: 0.0010, Test: 1.0000
Epoch 096, Loss: 0.0009, Test: 0.9917
Epoch 097, Loss: 0.0010, Test: 0.9917
Epoch 098, Loss: 0.0006, Test: 1.0000
Epoch 099, Loss: 0.0037, Test: 0.9917
Epoch 100, Loss: 0.0007, Test: 0.9917
Epoch 101, Loss: 0.0010, Test: 1.0000
Epoch 102, Loss: 0.0007, Test: 0.9917
Epoch 103, Loss: 0.0023, Test: 1.0000
Epoch 104, Loss: 0.0014, Test: 0.9917
Epoch 105, Loss: 0.0013, Test: 1.0000
Epoch 106, Loss: 0.0006, Test: 0.9917
Epoch 107, Loss: 0.0004, Test: 1.0000
Epoch 108, Loss: 0.0002, Test: 1.0000
Epoch 109, Loss: 0.0010, Test: 0.9917
Epoch 110, Loss: 0.0004, Test: 1.0000
Epoch 111, Loss: 0.0035, Test: 0.9917
Epoch 112, Loss: 0.0003, Test: 1.0000
Epoch 113, Loss: 0.0004, Test: 0.9833
Epoch 114, Loss: 0.0003, Test: 0.9917
Epoch 115, Loss: 0.0004, Test: 0.9917
Epoch 116, Loss: 0.0011, Test: 0.9833
Epoch 117, Loss: 0.0075, Test: 1.0000
Epoch 118, Loss: 0.0008, Test: 1.0000
Epoch 119, Loss: 0.0006, Test: 0.9917
Epoch 120, Loss: 0.0024, Test: 1.0000
Epoch 121, Loss: 0.0003, Test: 1.0000
Epoch 122, Loss: 0.0017, Test: 0.9917
Epoch 123, Loss: 0.0003, Test: 1.0000
Epoch 124, Loss: 0.0003, Test: 0.9917
Epoch 125, Loss: 0.0018, Test: 1.0000
Epoch 126, Loss: 0.0002, Test: 0.9917
Epoch 127, Loss: 0.0009, Test: 1.0000
Epoch 128, Loss: 0.0007, Test: 0.9917
Epoch 129, Loss: 0.0005, Test: 1.0000
Epoch 130, Loss: 0.0007, Test: 0.9917
Epoch 131, Loss: 0.0003, Test: 0.9917
Epoch 132, Loss: 0.0004, Test: 0.9917
Epoch 133, Loss: 0.0013, Test: 0.9917
Epoch 134, Loss: 0.0009, Test: 0.9917
Epoch 135, Loss: 0.0010, Test: 1.0000
Epoch 136, Loss: 0.0009, Test: 1.0000
Epoch 137, Loss: 0.0003, Test: 0.9917
Epoch 138, Loss: 0.0005, Test: 0.9917
Epoch 139, Loss: 0.0006, Test: 1.0000
Epoch 140, Loss: 0.0004, Test: 0.9917
Epoch 141, Loss: 0.0014, Test: 0.9917
Epoch 142, Loss: 0.0003, Test: 0.9917
Epoch 143, Loss: 0.0002, Test: 1.0000
Epoch 144, Loss: 0.0008, Test: 0.9917
Epoch 145, Loss: 0.0008, Test: 1.0000
Epoch 146, Loss: 0.0008, Test: 1.0000
Epoch 147, Loss: 0.0005, Test: 0.9917
Epoch 148, Loss: 0.0009, Test: 1.0000
Epoch 149, Loss: 0.0025, Test: 0.9917
Epoch 150, Loss: 0.0007, Test: 0.9833
Epoch 151, Loss: 0.0006, Test: 1.0000
Epoch 152, Loss: 0.0043, Test: 0.9917
Epoch 153, Loss: 0.0005, Test: 0.9917
Epoch 154, Loss: 0.0006, Test: 1.0000
Epoch 155, Loss: 0.0013, Test: 1.0000
Epoch 156, Loss: 0.0003, Test: 0.9917
Epoch 157, Loss: 0.0014, Test: 0.9917
Epoch 158, Loss: 0.0002, Test: 0.9917
Epoch 159, Loss: 0.0004, Test: 0.9917
Epoch 160, Loss: 0.0008, Test: 1.0000
Epoch 161, Loss: 0.0002, Test: 0.9917
Epoch 162, Loss: 0.0008, Test: 1.0000
Epoch 163, Loss: 0.0003, Test: 1.0000
Epoch 164, Loss: 0.0009, Test: 0.9917
Epoch 165, Loss: 0.0004, Test: 1.0000
Epoch 166, Loss: 0.0005, Test: 0.9917
Epoch 167, Loss: 0.0005, Test: 1.0000
Epoch 168, Loss: 0.0003, Test: 0.9917
Epoch 169, Loss: 0.0002, Test: 0.9917
Epoch 170, Loss: 0.0003, Test: 0.9917
Epoch 171, Loss: 0.0017, Test: 0.9917
Epoch 172, Loss: 0.0007, Test: 0.9917
Epoch 173, Loss: 0.0003, Test: 0.9917
Epoch 174, Loss: 0.0008, Test: 1.0000
Epoch 175, Loss: 0.0016, Test: 0.9917
Epoch 176, Loss: 0.0024, Test: 0.9917
Epoch 177, Loss: 0.0002, Test: 0.9917
Epoch 178, Loss: 0.0006, Test: 1.0000
Epoch 179, Loss: 0.0005, Test: 0.9917
Epoch 180, Loss: 0.0004, Test: 1.0000
Epoch 181, Loss: 0.0004, Test: 1.0000
Epoch 182, Loss: 0.0003, Test: 1.0000
Epoch 183, Loss: 0.0009, Test: 0.9917
Epoch 184, Loss: 0.0015, Test: 1.0000
Epoch 185, Loss: 0.0013, Test: 0.9917
Epoch 186, Loss: 0.0006, Test: 0.9917
Epoch 187, Loss: 0.0019, Test: 0.9917
Epoch 188, Loss: 0.0004, Test: 1.0000
Epoch 189, Loss: 0.0009, Test: 1.0000
Epoch 190, Loss: 0.0003, Test: 1.0000
Epoch 191, Loss: 0.0014, Test: 0.9917
Epoch 192, Loss: 0.0004, Test: 0.9917
Epoch 193, Loss: 0.0005, Test: 0.9917
Epoch 194, Loss: 0.0019, Test: 1.0000
Epoch 195, Loss: 0.0007, Test: 0.9917
Epoch 196, Loss: 0.0005, Test: 1.0000
Epoch 197, Loss: 0.0006, Test: 0.9917
Epoch 198, Loss: 0.0007, Test: 1.0000
Epoch 199, Loss: 0.0003, Test: 0.9917
Epoch 200, Loss: 0.0005, Test: 1.0000

Contributor

@xnuohz xnuohz left a comment


Add a unit test if the dataset is small and easy to download.

Contributor

@puririshi98 puririshi98 left a comment


lgtm will merge once CI passes

@puririshi98 puririshi98 merged commit 91958a4 into pyg-team:master May 22, 2025
19 checks passed
chrisn-pik pushed a commit to chrisn-pik/pytorch_geometric that referenced this pull request Jun 30, 2025

subset = []
for dataset in list_of_datasets:
    self.newpath = self.root + '/' + dataset.split("/")[1]
Member


I got to this PR while reviewing #10472. Does this work on Windows? Could we make it os-agnostic?
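An OS-agnostic alternative would use os.path.join instead of concatenating with '/'. The values below are hypothetical, just to show the pattern; bucket paths returned by the server still use '/' as a separator regardless of platform, so splitting on '/' stays as-is.

```python
import os.path as osp

root = osp.join('data', 'medshapenet')  # uses the platform's separator
dataset = 'bucket/liver'                # illustrative bucket path
# osp.join builds the local path with the correct separator on any OS.
newpath = osp.join(root, dataset.split('/')[1])
```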

Contributor Author


Hi @akihironitta

I'm sorry for not getting back to you sooner.

Is this still an open issue?
