Support torch.compile in enet, fbnet and 3d-unet pytorch samples #189

Open · wants to merge 422 commits into base: main
Conversation

dvrogozh (Contributor)

No description provided.

jiayisunx and others added 30 commits May 5, 2024 15:04
* support LCM int8

* fix LCM int8-bf16

* fix LCM int8

* update README
… files (#2022)

* ipex/efficientnet: extract get_system_config to separate module

* ipex/efficientnet: move system_config.py to common folder

* common: rename system_config.py to js_sysinfo.py

* common: add js_merge.py

js_merge is a tool to merge a few .json output files together,
preserving all unique values. A value type mismatch is considered
a fatal error. When a few values of the same type exist for the same
key, only the value from the first input is kept and a warning is printed.
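The merge strategy described above can be sketched as follows. This is a minimal illustration of the stated rules (unique values preserved, type mismatch fatal, first-input value kept with a warning on same-type conflicts), not the actual js_merge.py implementation; `merge` is an illustrative name.

```python
# Sketch of the js_merge rules: merge dict b into dict a recursively,
# keep unique values, raise on type mismatch, keep the first value
# (with a warning) when two same-type values conflict.
import sys


def merge(a, b, path=""):
    """Merge dict b into dict a, returning a new merged dict."""
    out = dict(a)
    for key, val in b.items():
        if key not in out:
            out[key] = val  # unique value: preserved
            continue
        cur = out[key]
        if type(cur) is not type(val):
            # value type mismatch is a fatal error
            raise TypeError(
                f"type mismatch at {path}/{key}: "
                f"{type(cur).__name__} vs {type(val).__name__}")
        if isinstance(cur, dict):
            out[key] = merge(cur, val, f"{path}/{key}")
        elif cur != val:
            # same type, different values: keep first input, warn
            print(f"warning: conflicting values at {path}/{key}; "
                  "keeping first", file=sys.stderr)
    return out
```

For example, merging `{"a": 1}` with `{"a": 1, "b": 2}` yields `{"a": 1, "b": 2}`, while merging `{"a": 1}` with `{"a": 5}` keeps `1` and prints a warning.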

* common/sysinfo: collect docker info

* common/sysinfo: collect dkms info

* common/sysinfo: collect dpkg info for key packages

* common: add readme

* common/sysinfo: add svr-info support

* common/sysinfo: differentiate more between docker and baremetal

* common/sysinfo: break get_docker_info into 2 functions

* common/sysinfo: report key hardware information

* common/sysinfo: add lshw support and fetch memory info

* common/sysinfo: parse lshw to get cpu info

* core/sysinfo: expand cpu info configuration

* common/sysinfo: expand configuration for memory info

* common/sysinfo: add -o option and improve messaging

* common: update readme for sysinfo

---------

Signed-off-by: Dmitry Rogozhkin <[email protected]>
* make num_iter flexible

* bugfix for bert-large ddp

* bkc for rn50 ddp training update

* bkc for rn50 ddp training update

* bkc for dlrm_v1 ddp training update

* bugfix for llms output

---------

Co-authored-by: mahathis <[email protected]>
* Imported and ran image recognition and language modelling TensorFlow CPU workloads

Co-authored-by: Mahathi Vatsal <[email protected]>
* enable yolov5 on CPU

Co-authored-by: nick.camarena <[email protected]>
Co-authored-by: Clayne Robison <[email protected]>
Co-authored-by: Jitendra Patil <[email protected]>
model script
runner and setup shell script
readme
helper to get dataset
test for container
Run as:
  sudo \
    IMAGE=enet \
    OUTPUT_DIR=/tmp/output \
    PROFILE=$(pwd)/models_v2/pytorch/efficientnet/inference/gpu/profiles/b0.bf16.csv \
    PYTHONPATH=$(pwd)/models_v2/common \
      $(pwd)/models_v2/pytorch/efficientnet/inference/gpu/benchmark.sh

This commit also adds dummy and framework fields to the efficientnet
results output and fixes stdev naming in a couple of places.

Co-authored-by: Voas, Tanner <[email protected]>
Signed-off-by: Dmitry Rogozhkin <[email protected]>
Signed-off-by: Voas, Tanner <[email protected]>
Added a new tool, json_to_csv, to dump multiple JSON objects to a single
CSV (nested JSON objects are serialized into their cells).
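The idea behind such a tool can be sketched as below: union the keys of all objects into one header row and serialize any nested values. This is an illustration of the approach, not the repository's json_to_csv; the function name is hypothetical.

```python
# Sketch: flatten a list of JSON objects into one CSV string, with the
# header as the union of all keys and nested values JSON-serialized.
import csv
import io
import json


def json_to_csv(objects):
    """Serialize a list of JSON objects into a single CSV string."""
    fields = []
    for obj in objects:
        for key in obj:
            if key not in fields:
                fields.append(key)  # preserve first-seen key order
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for obj in objects:
        # nested dicts/lists become JSON strings inside the cell
        row = {k: json.dumps(v) if isinstance(v, (dict, list)) else v
               for k, v in obj.items()}
        writer.writerow(row)
    return buf.getvalue()
```

Objects missing a key simply get an empty cell in that column.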

Signed-off-by: Voas, Tanner <[email protected]>
Signed-off-by: Dmitry Rogozhkin <[email protected]>
Co-authored-by: Voas, Tanner <[email protected]>
Align summary_utils to efficientnet
Combine functions for dummy / non-dummy inputs
* remove ipex for inductor

* fix calibration no prompt for inductor

* add fp16 for llm

* llm torch.compile forward only
- use descriptive variable names iteration and test rather than i and t in the loop
- remove unwanted try/catch statements in common code

Signed-off-by: Voas, Tanner <[email protected]>
Co-authored-by: mahathis <[email protected]>
* Add dummy mode to swin transformer

* Random data, no dataset needed in dummy mode
This patch adds some external metadata to benchmark results.

Signed-off-by: Dmitry Rogozhkin <[email protected]>
* update bkc for pvc itex 2.15.0.0
* update bkc for atsm itex 2.15.0.0
* TF 2.15.0 Flex containers (#2087)
* validate flex 170 and 140
* Updated baremetal for itex 2.15 (#2098)
---------
Co-authored-by: XumingGai <[email protected]>
Co-authored-by: Srikanth Ramakrishna <[email protected]>
Co-authored-by: Mahathi Vatsal <[email protected]>
- Add new telemetry.py tool for capturing telemetry
  - Start SMI telemetry capture as its own process inside benchmark.py
  - Support UNIX socket communication and python multiprocessing PIPEs
    for external control of telemetry start, stop, and termination
- Add requirements to the efficientnet sample to work with this
- Add processing code to convert the output CSV into a JSON file
- Mention metadata in the benchmark.py readme

UNIX socket API implemented by Dmitry Rogozhkin <[email protected]>
UNIX socket API adapted into commit by Voas, Tanner <[email protected]>

Co-authored-by: Dmitry Rogozhkin <[email protected]>
Signed-off-by: Voas, Tanner <[email protected]>
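The external start/stop/terminate control flow described above can be sketched with a multiprocessing Pipe. For portability this sketch drives a thread standing in for the capture process, and a counter stands in for real SMI sampling; `telemetry_worker` and `run_controlled_capture` are illustrative names, not the repository's API.

```python
# Sketch: control a telemetry capture loop over a multiprocessing Pipe
# with "start", "stop", and "terminate" commands. A timestamp append
# stands in for taking one SMI sample.
import multiprocessing as mp
import threading
import time


def telemetry_worker(conn, samples):
    """Capture (stubbed) samples while started; exit on 'terminate'."""
    capturing = False
    while True:
        if conn.poll(0.01):  # wait up to 10 ms for a control command
            cmd = conn.recv()
            if cmd == "start":
                capturing = True
            elif cmd == "stop":
                capturing = False
            elif cmd == "terminate":
                return
        if capturing:
            samples.append(time.monotonic())  # stand-in for one sample


def run_controlled_capture(duration=0.2):
    samples = []
    parent, child = mp.Pipe()
    worker = threading.Thread(target=telemetry_worker,
                              args=(child, samples))
    worker.start()
    parent.send("start")
    time.sleep(duration)
    parent.send("stop")
    parent.send("terminate")
    worker.join()
    return samples
```

The same command protocol works unchanged when the worker is a real `mp.Process` or sits behind a UNIX socket; only the transport differs.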
- Core inference has been moved to its own class, Inference.
- Replaced "NUM_IMAGES" and "NUM_ITERATIONS" with the single param "NUM_INPUTS"
  - Aligns with other samples' usage model (IFRNet, RIFE)
  - "NUM_INPUTS" is functionally the same as the old "NUM_IMAGES"
  - "NUM_ITERATIONS" is 1 in accuracy mode and is dynamic in benchmark
    mode based on the specified min/max test durations.
- Added support to PyTorch EfficientNet samples to specify min and max test duration
- Logs raw perf with finer granularity now since we have it available
- Use test duration in benchmark for enet
- Remove unused quantization code paths from code

Signed-off-by: Voas, Tanner <[email protected]>
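One plausible reading of the min/max test-duration behavior above is a loop that runs until the minimum duration is met but refuses to start an iteration that would push past the maximum. This is a sketch under assumed semantics, not the sample's actual benchmark loop.

```python
# Sketch: duration-driven benchmark loop with a dynamic iteration count.
import time


def benchmark(step, min_duration, max_duration):
    """Run step() repeatedly; return (iterations, elapsed_seconds)."""
    iterations = 0
    elapsed = 0.0
    start = time.monotonic()
    while elapsed < min_duration:
        # Estimate whether one more iteration would overshoot the max.
        avg = elapsed / iterations if iterations else 0.0
        if iterations and elapsed + avg > max_duration:
            break
        step()
        iterations += 1
        elapsed = time.monotonic() - start
    return iterations, elapsed
```

Raw per-iteration timings could be logged inside the loop, which is what makes the finer-granularity perf reporting possible.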
Mahathi-Vatsal and others added 23 commits July 23, 2024 10:55
…(#2370)

Bumps [torch](https://github.com/pytorch/pytorch) from 1.13.1 to 2.2.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v1.13.1...v2.2.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
…ws (#2374)

Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.25.13 to 3.25.15.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@v3.25.13...v3.25.15)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…(#2375)

Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.3.3 to 2.4.0.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](ossf/scorecard-action@v2.3.3...v2.4.0)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
* Update main README

* Update main README

* Updated README

* Merge pull request intel#184 from intel/mahathi/update_readme

Update README

* Fix torch version to fix dependabot issue

* Merge pull request intel#185 from intel/mahathi/fix_dependabot_issues

Fix torch version to fix dependabot issue
* refactor gpu

* refactor tf

* add tf max-gpu

* respond to lint errors

* remove max-gpu folder

* add cuda models to pytorch
Bumps [keras](https://github.com/keras-team/keras) from 2.6.0rc3 to 2.13.1rc0.
- [Release notes](https://github.com/keras-team/keras/releases)
- [Commits](keras-team/keras@v2.6.0-rc3...v2.13.1-rc0)

---
updated-dependencies:
- dependency-name: keras
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
…ows (#2382)

* Bump super-linter/super-linter from 6.7.0 to 6.8.0 in /.github/workflows

Bumps [super-linter/super-linter](https://github.com/super-linter/super-linter) from 6.7.0 to 6.8.0.
- [Release notes](https://github.com/super-linter/super-linter/releases)
- [Changelog](https://github.com/super-linter/super-linter/blob/main/CHANGELOG.md)
- [Commits](super-linter/super-linter@v6.7.0...v6.8.0)

---
updated-dependencies:
- dependency-name: super-linter/super-linter
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>

* Resolved super linter issues

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Mahathi Vatsal <[email protected]>
* specify the version of sympy (#2373)
…dated (#2389)

* Removed all instances of miniconda, unvalidated

* fixed miniforge base image path
* Removed all instances of miniconda, unvalidated

* Also replaced all Intel channels with links to the specific software repos
Getting 217 fps at fp32 with torch.compile vs. 187 fps in eager mode,
running on PVC.

Signed-off-by: Dmitry Rogozhkin <[email protected]>
Getting 300 fps at fp32 with torch.compile vs. 187 fps in eager mode,
running on PVC.

Signed-off-by: Dmitry Rogozhkin <[email protected]>
Signed-off-by: Dmitry Rogozhkin <[email protected]>