content (string, 1 to 103k chars) | path (string, 8 to 216 chars) | filename (string, 2 to 179 chars) | language (string, 15 classes) | size_bytes (int64, 2 to 189k) | quality_score (float64, 0.5 to 0.95) | complexity (float64, 0 to 1) | documentation_ratio (float64, 0 to 1) | repository (string, 5 classes) | stars (int64, 0 to 1k) | created_date (date, 2023-07-10 19:21:08 to 2025-07-09 19:11:45) | license (string, 4 classes) | is_test (bool, 2 classes) | file_hash (string, 32 chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
name: PHP CI\non:\n push:\n branches:\n - main\n pull_request:\n branches:\n - main\n\njobs:\n ci:\n runs-on: ubuntu-latest\n\n strategy:\n matrix:\n php: [7.3, 7.4, 8.0, 8.1, 8.2]\n\n steps:\n - name: Checkout\n uses: actions/checkout@v3\n \n - name: Setup PHP\n uses: shivammathur/setup-php@8e2ac35f639d3e794c1da1f28999385ab6fdf0fc\n with:\n php-version: ${{ matrix.php }}\n coverage: xdebug\n \n - name: Composer install\n run: composer install\n \n - name: PHP Linting\n run: ./vendor/bin/phpcs --standard=.duo_linting.xml -n src/* tests\n\n - name: PHP tests\n run: ./vendor/bin/phpunit --process-isolation tests\n | dataset_sample\yaml\zabbix_zabbix\ui\vendor\duosecurity\duo_universal_php\.github\workflows\php_ci.yml | php_ci.yml | YAML | 718 | 0.8 | 0 | 0 | awesome-app | 305 | 2023-08-15T09:43:22.327955 | BSD-3-Clause | false | 47ac66a99c154a08d256c74d3a85a8df |
services:\n postgres:\n image: postgres:15\n container_name: zed_postgres\n ports:\n - 5432:5432\n environment:\n POSTGRES_HOST_AUTH_METHOD: trust\n volumes:\n - postgres_data:/var/lib/postgresql/data\n - ./docker-compose.sql:/docker-entrypoint-initdb.d/init.sql\n\n blob_store:\n image: quay.io/minio/minio\n container_name: blob_store\n command: server /data\n ports:\n - 9000:9000\n environment:\n MINIO_ROOT_USER: the-blob-store-access-key\n MINIO_ROOT_PASSWORD: the-blob-store-secret-key\n volumes:\n - ./.blob_store:/data\n\n livekit_server:\n image: livekit/livekit-server\n container_name: livekit_server\n entrypoint: /livekit-server --config /livekit.yaml\n ports:\n - 7880:7880\n - 7881:7881\n - 7882:7882/udp\n volumes:\n - ./livekit.yaml:/livekit.yaml\n\n postgrest_app:\n image: postgrest/postgrest\n container_name: postgrest_app\n ports:\n - 8081:8081\n environment:\n PGRST_DB_URI: postgres://postgres@postgres:5432/zed\n volumes:\n - ./crates/collab/postgrest_app.conf:/etc/postgrest.conf\n command: postgrest /etc/postgrest.conf\n depends_on:\n - postgres\n\n postgrest_llm:\n image: postgrest/postgrest\n container_name: postgrest_llm\n ports:\n - 8082:8082\n environment:\n PGRST_DB_URI: postgres://postgres@postgres:5432/zed_llm\n volumes:\n - ./crates/collab/postgrest_llm.conf:/etc/postgrest.conf\n command: postgrest /etc/postgrest.conf\n depends_on:\n - postgres\n\nvolumes:\n postgres_data:\n | dataset_sample\yaml\zed-industries_zed\compose.yml | compose.yml | YAML | 1,552 | 0.8 | 0 | 0 | awesome-app | 645 | 2024-04-17T11:28:35.427176 | MIT | false | 826f7f5f9a1bd4639ae145c94267d613 |
name: "Check formatting"\ndescription: "Checks code formatting use cargo fmt"\n\nruns:\n using: "composite"\n steps:\n - name: cargo fmt\n shell: bash -euxo pipefail {0}\n run: cargo fmt --all -- --check\n | dataset_sample\yaml\zed-industries_zed\.github\actions\check_style\action.yml | action.yml | YAML | 211 | 0.7 | 0 | 0 | vue-tools | 816 | 2024-10-04T10:10:43.262015 | MIT | false | fd1b6add1fd41b840ba559acf6fd3a04 |
name: "Run tests"\ndescription: "Runs the tests"\n\nruns:\n using: "composite"\n steps:\n - name: Install Rust\n shell: bash -euxo pipefail {0}\n run: |\n cargo install cargo-nextest --locked\n\n - name: Install Node\n uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n with:\n node-version: "18"\n\n - name: Limit target directory size\n shell: bash -euxo pipefail {0}\n run: script/clear-target-dir-if-larger-than 100\n\n - name: Run tests\n shell: bash -euxo pipefail {0}\n run: cargo nextest run --workspace --no-fail-fast\n | dataset_sample\yaml\zed-industries_zed\.github\actions\run_tests\action.yml | action.yml | YAML | 595 | 0.8 | 0.043478 | 0 | node-utils | 57 | 2023-09-27T08:23:35.531161 | MIT | true | 9c09e11739cb4a147f430d13f85e04b9 |
name: "Run tests on Windows"\ndescription: "Runs the tests on Windows"\n\ninputs:\n working-directory:\n description: "The working directory"\n required: true\n default: "."\n\nruns:\n using: "composite"\n steps:\n - name: Install Rust\n shell: pwsh\n working-directory: ${{ inputs.working-directory }}\n run: cargo install cargo-nextest --locked\n\n - name: Install Node\n uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n with:\n node-version: "18"\n\n - name: Run tests\n shell: pwsh\n working-directory: ${{ inputs.working-directory }}\n run: cargo nextest run --workspace --no-fail-fast --config='profile.dev.debug="limited"'\n | dataset_sample\yaml\zed-industries_zed\.github\actions\run_tests_windows\action.yml | action.yml | YAML | 697 | 0.95 | 0 | 0 | node-utils | 983 | 2024-06-08T21:05:23.452396 | GPL-3.0 | true | d3bcd2ed40e111e6b9e19e19bd30804f |
name: Bug Report (Agent Panel)\ndescription: Zed Agent Panel Bugs\ntype: "Bug"\nlabels: ["agent", "ai"]\ntitle: "Agent Panel: <a short description of the Agent Panel bug>"\nbody:\n - type: textarea\n attributes:\n label: Summary\n description: Describe the bug with a one line summary, and provide detailed reproduction steps\n value: |\n <!-- Please insert a one line summary of the issue below -->\n SUMMARY_SENTENCE_HERE\n\n ### Description\n <!-- Describe with sufficient detail to reproduce from a clean Zed install. -->\n <!-- Please include the LLM provider and model name you are using -->\n Steps to trigger the problem:\n 1.\n 2.\n 3.\n\n Actual Behavior:\n Expected Behavior:\n validations:\n required: true\n\n - type: textarea\n id: environment\n attributes:\n label: Zed Version and System Specs\n description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'\n placeholder: |\n Output of "zed: Copy System Specs Into Clipboard"\n validations:\n required: true\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\01_bug_agent.yml | 01_bug_agent.yml | YAML | 1,120 | 0.95 | 0 | 0.030303 | awesome-app | 520 | 2024-07-02T21:13:18.491962 | GPL-3.0 | false | 0da85bdb0fed52596965cab18c6542ef |
name: Bug Report (Edit Predictions)\ndescription: Zed Edit Predictions bugs\ntype: "Bug"\nlabels: ["ai", "inline completion", "zeta"]\ntitle: "Edit Predictions: <a short description of the Edit Prediction bug>"\nbody:\n - type: textarea\n attributes:\n label: Summary\n description: Describe the bug with a one line summary, and provide detailed reproduction steps\n value: |\n <!-- Please insert a one line summary of the issue below -->\n SUMMARY_SENTENCE_HERE\n\n ### Description\n <!-- Describe with sufficient detail to reproduce from a clean Zed install. -->\n <!-- Please include the LLM provider and model name you are using -->\n Steps to trigger the problem:\n 1.\n 2.\n 3.\n\n Actual Behavior:\n Expected Behavior:\n validations:\n required: true\n\n - type: textarea\n id: environment\n attributes:\n label: Zed Version and System Specs\n description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'\n placeholder: |\n Output of "zed: Copy System Specs Into Clipboard"\n validations:\n required: true\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\02_bug_edit_predictions.yml | 02_bug_edit_predictions.yml | YAML | 1,159 | 0.95 | 0 | 0.030303 | awesome-app | 761 | 2023-12-10T05:59:51.314152 | GPL-3.0 | false | 3b9d73f694cef11350eedca657b8da73 |
name: Bug Report (Git)\ndescription: Zed Git-Related Bugs\ntype: "Bug"\nlabels: ["git"]\ntitle: "Git: <a short description of the Git bug>"\nbody:\n - type: textarea\n attributes:\n label: Summary\n description: Describe the bug with a one line summary, and provide detailed reproduction steps\n value: |\n <!-- Please insert a one line summary of the issue below -->\n SUMMARY_SENTENCE_HERE\n\n ### Description\n <!-- Describe with sufficient detail to reproduce from a clean Zed install. -->\n Steps to trigger the problem:\n 1.\n 2.\n 3.\n\n Actual Behavior:\n Expected Behavior:\n\n validations:\n required: true\n - type: textarea\n id: environment\n attributes:\n label: Zed Version and System Specs\n description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'\n placeholder: |\n Output of "zed: Copy System Specs Into Clipboard"\n validations:\n required: true\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\03_bug_git.yml | 03_bug_git.yml | YAML | 1,009 | 0.95 | 0 | 0.03125 | vue-tools | 276 | 2023-07-16T04:59:07.832462 | Apache-2.0 | false | 97bc3c458209743a9f3d5efde03f5fac |
name: Bug Report (Other)\ndescription: |\n Something else is broken in Zed (excluding crashes).\ntype: "Bug"\nbody:\n - type: textarea\n attributes:\n label: Summary\n description: Provide a one sentence summary and detailed reproduction steps\n value: |\n <!-- Begin your issue with a one sentence summary -->\n SUMMARY_SENTENCE_HERE\n\n ### Description\n <!-- Describe with sufficient detail to reproduce from a clean Zed install.\n - Any code must be sufficient to reproduce (include context!)\n - Code must be pasted as text, not just as a screenshot.\n - Issues with insufficient detail may be summarily closed.\n -->\n\n Steps to reproduce:\n 1.\n 2.\n 3.\n 4.\n\n Expected Behavior:\n Actual Behavior:\n\n <!-- Before Submitting, did you:\n 1. Include settings.json, keymap.json, .editorconfig if relevant?\n 2. Check your Zed.log for relevant errors? (please include!)\n 3. Click Preview to ensure everything looks right?\n 4. Hide videos, large images and logs in ``` inside collapsible blocks:\n\n <details><summary>click to expand</summary>\n\n ```json\n\n ```\n </details>\n -->\n\n validations:\n required: true\n\n - type: textarea\n id: environment\n attributes:\n label: Zed Version and System Specs\n description: |\n Open Zed, from the command palette select "zed: Copy System Specs Into Clipboard"\n placeholder: |\n Output of "zed: Copy System Specs Into Clipboard"\n validations:\n required: true\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\10_bug_report.yml | 10_bug_report.yml | YAML | 1,614 | 0.95 | 0.035714 | 0.021277 | node-utils | 855 | 2025-03-12T16:42:16.944491 | GPL-3.0 | false | faebce2f2184205daaae9ba3c1e376b3 |
name: Crash Report\ndescription: Zed is Crashing or Hanging\ntype: "Crash"\nbody:\n - type: textarea\n attributes:\n label: Summary\n description: Summarize the issue with detailed reproduction steps\n value: |\n <!-- Begin your issue with a one sentence summary -->\n SUMMARY_SENTENCE_HERE\n\n ### Description\n <!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->\n Steps to trigger the problem:\n 1.\n 2.\n 3.\n\n Actual Behavior:\n Expected Behavior:\n\n validations:\n required: true\n - type: textarea\n id: environment\n attributes:\n label: Zed Version and System Specs\n description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'\n placeholder: |\n Output of "zed: Copy System Specs Into Clipboard"\n validations:\n required: true\n - type: textarea\n attributes:\n label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.\n description: |\n macOS: `~/Library/Logs/Zed/Zed.log`\n Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME\n If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.\n value: |\n <details><summary>Zed.log</summary>\n\n <!-- Paste your log inside the code block. -->\n ```log\n\n ```\n\n </details>\n validations:\n required: false\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\11_crash_report.yml | 11_crash_report.yml | YAML | 1,504 | 0.95 | 0 | 0.022222 | awesome-app | 140 | 2023-10-16T09:04:03.154615 | GPL-3.0 | false | 06f55bef76b327477ad4f9f082dae0dc |
name: Other [Staff Only]\ndescription: Zed Staff Only\nbody:\n - type: textarea\n attributes:\n label: Summary\n value: |\n <!-- Please insert a one line summary of the issue below -->\n SUMMARY_SENTENCE_HERE\n\n ### Description\n\n IF YOU DO NOT WORK FOR ZED INDUSTRIES DO NOT CREATE ISSUES WITH THIS TEMPLATE.\n THEY WILL BE AUTO-CLOSED AND MAY RESULT IN YOU BEING BANNED FROM THE ZED ISSUE TRACKER.\n\n FEATURE REQUESTS / SUPPORT REQUESTS SHOULD BE OPENED AS DISCUSSIONS:\n https://github.com/zed-industries/zed/discussions/new/choose\n validations:\n required: true\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\99_other.yml | 99_other.yml | YAML | 625 | 0.95 | 0 | 0.0625 | node-utils | 155 | 2023-08-08T11:18:36.898357 | MIT | false | 9d3a487cf801d36b3a24de8851226da8 |
# yaml-language-server: $schema=https://json.schemastore.org/github-issue-config.json\nblank_issues_enabled: false\ncontact_links:\n - name: Feature Request\n url: https://github.com/zed-industries/zed/discussions/new/choose\n about: To request a feature, open a new Discussion in one of the appropriate Discussion categories\n - name: "Zed Discord"\n url: https://zed.dev/community-links\n about: Real-time discussion and user support\n | dataset_sample\yaml\zed-industries_zed\.github\ISSUE_TEMPLATE\config.yml | config.yml | YAML | 442 | 0.8 | 0 | 0.111111 | vue-tools | 986 | 2024-01-07T14:58:50.903032 | BSD-3-Clause | false | 54b229e8bda06b256d7c9f6db6835bfb |
name: Bump collab-staging Tag\n\non:\n schedule:\n # Fire every day at 16:00 UTC (At the start of the US workday)\n - cron: "0 16 * * *"\n\njobs:\n update-collab-staging-tag:\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n steps:\n - name: Checkout repository\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n fetch-depth: 0\n\n - name: Update collab-staging tag\n run: |\n git config user.name github-actions\n git config user.email github-actions@github.com\n git tag -f collab-staging\n git push origin collab-staging --force\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\bump_collab_staging.yml | bump_collab_staging.yml | YAML | 660 | 0.8 | 0.043478 | 0.05 | vue-tools | 118 | 2024-09-04T16:05:02.131118 | GPL-3.0 | false | bd2e1327e36a1d22251d645379f25ff9 |
name: bump_patch_version\n\non:\n workflow_dispatch:\n inputs:\n branch:\n description: "Branch name to run on"\n required: true\n\nconcurrency:\n # Allow only one workflow per any non-`main` branch.\n group: ${{ github.workflow }}-${{ inputs.branch }}\n cancel-in-progress: true\n\njobs:\n bump_patch_version:\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - buildjet-16vcpu-ubuntu-2204\n steps:\n - name: Checkout code\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n ref: ${{ github.event.inputs.branch }}\n ssh-key: ${{ secrets.ZED_BOT_DEPLOY_KEY }}\n\n - name: Bump Patch Version\n run: |\n set -eux\n\n channel=$(cat crates/zed/RELEASE_CHANNEL)\n\n tag_suffix=""\n case $channel in\n stable)\n ;;\n preview)\n tag_suffix="-pre"\n ;;\n *)\n echo "this must be run on either of stable|preview release branches" >&2\n exit 1\n ;;\n esac\n which cargo-set-version > /dev/null || cargo install cargo-edit\n output=$(cargo set-version -p zed --bump patch 2>&1 | sed 's/.* //')\n export GIT_COMMITTER_NAME="Zed Bot"\n export GIT_COMMITTER_EMAIL="hi@zed.dev"\n git commit -am "Bump to $output for @$GITHUB_ACTOR" --author "Zed Bot <hi@zed.dev>"\n git tag v${output}${tag_suffix}\n git push origin HEAD v${output}${tag_suffix}\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\bump_patch_version.yml | bump_patch_version.yml | YAML | 1,531 | 0.95 | 0.039216 | 0.044444 | python-kit | 658 | 2024-06-07T10:14:55.025841 | GPL-3.0 | false | f5f3f25d3ed3bb592fc27e957731c5db |
name: CI\n\non:\n push:\n branches:\n - main\n - "v[0-9]+.[0-9]+.x"\n tags:\n - "v*"\n\n pull_request:\n branches:\n - "**"\n\nconcurrency:\n # Allow only one workflow per any non-`main` branch.\n group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}\n cancel-in-progress: true\n\nenv:\n CARGO_TERM_COLOR: always\n CARGO_INCREMENTAL: 0\n RUST_BACKTRACE: 1\n\njobs:\n job_spec:\n name: Decide which jobs to run\n if: github.repository_owner == 'zed-industries'\n outputs:\n run_tests: ${{ steps.filter.outputs.run_tests }}\n run_license: ${{ steps.filter.outputs.run_license }}\n runs-on:\n - ubuntu-latest\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n # 350 is arbitrary; ~10 days of history on main (5secs); full history is ~25secs\n fetch-depth: ${{ github.ref == 'refs/heads/main' && 2 || 350 }}\n - name: Fetch git history and generate output filters\n id: filter\n run: |\n if [ -z "$GITHUB_BASE_REF" ]; then\n echo "Not in a PR context (i.e., push to main/stable/preview)"\n COMPARE_REV=$(git rev-parse HEAD~1)\n else\n echo "In a PR context comparing to pull_request.base.ref"\n git fetch origin "$GITHUB_BASE_REF" --depth=350\n COMPARE_REV=$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)\n fi\n # Specify anything which should skip full CI in this regex:\n # - docs/\n # - .github/ISSUE_TEMPLATE/\n # - .github/workflows/ (except .github/workflows/ci.yml)\n SKIP_REGEX='^(docs/|\.github/(ISSUE_TEMPLATE|workflows/(?!ci)))'\n if [[ $(git diff --name-only $COMPARE_REV ${{ github.sha }} | grep -vP "$SKIP_REGEX") ]]; then\n echo "run_tests=true" >> $GITHUB_OUTPUT\n else\n echo "run_tests=false" >> $GITHUB_OUTPUT\n fi\n if [[ $(git diff --name-only $COMPARE_REV ${{ github.sha }} | grep '^Cargo.lock') ]]; then\n echo "run_license=true" >> $GITHUB_OUTPUT\n else\n echo "run_license=false" >> $GITHUB_OUTPUT\n fi\n\n migration_checks:\n name: Check Postgres and Protobuf migrations, mergeability\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n timeout-minutes: 60\n runs-on:\n - self-hosted\n - test\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n fetch-depth: 0 # fetch full history\n\n - name: Remove untracked files\n run: git clean -df\n\n - name: Find modified migrations\n shell: bash -euxo pipefail {0}\n run: |\n export SQUAWK_GITHUB_TOKEN=${{ github.token }}\n . ./script/squawk\n\n - name: Ensure fresh merge\n shell: bash -euxo pipefail {0}\n run: |\n if [ -z "$GITHUB_BASE_REF" ];\n then\n echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> $GITHUB_ENV\n else\n git checkout -B temp\n git merge -q origin/$GITHUB_BASE_REF -m "merge main into temp"\n echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> $GITHUB_ENV\n fi\n\n - uses: bufbuild/buf-setup-action@v1\n with:\n version: v1.29.0\n - uses: bufbuild/buf-breaking-action@v1\n with:\n input: "crates/proto/proto/"\n against: "https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/"\n\n workspace_hack:\n timeout-minutes: 60\n name: Check workspace-hack crate\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n runs-on:\n - buildjet-8vcpu-ubuntu-2204\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n - name: Add Rust to the PATH\n run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH\n - name: Install cargo-hakari\n uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2\n with:\n command: install\n args: cargo-hakari@0.9.35\n\n - name: Check workspace-hack Cargo.toml is up-to-date\n run: |\n cargo hakari generate --diff || {\n echo "To fix, run script/update-workspace-hack or script/update-workspace-hack.ps1";\n false\n }\n - name: Check all crates depend on workspace-hack\n run: |\n cargo hakari manage-deps --dry-run || {\n echo "To fix, run script/update-workspace-hack or script/update-workspace-hack.ps1"\n false\n }\n\n style:\n timeout-minutes: 60\n name: Check formatting and spelling\n needs: [job_spec]\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - buildjet-8vcpu-ubuntu-2204\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n\n - uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0\n with:\n version: 9\n\n - name: Prettier Check on /docs\n working-directory: ./docs\n run: |\n pnpm dlx prettier@${PRETTIER_VERSION} . --check || {\n echo "To fix, run from the root of the zed repo:"\n echo " cd docs && pnpm dlx prettier@${PRETTIER_VERSION} . --write && cd .."\n false\n }\n env:\n PRETTIER_VERSION: 3.5.0\n\n # To support writing comments that will certainly be revisited.\n - name: Check for todo! and FIXME comments\n run: script/check-todos\n\n - name: Run style checks\n uses: ./.github/actions/check_style\n\n - name: Check for typos\n uses: crate-ci/typos@8e6a4285bcbde632c5d79900a7779746e8b7ea3f # v1.24.6\n with:\n config: ./typos.toml\n\n macos_tests:\n timeout-minutes: 60\n name: (macOS) Run Clippy and tests\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n runs-on:\n - self-hosted\n - test\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Configure CI\n run: |\n mkdir -p ./../.cargo\n cp ./.cargo/ci-config.toml ./../.cargo/config.toml\n\n - name: cargo clippy\n run: ./script/clippy\n\n - name: Install cargo-machete\n uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2\n with:\n command: install\n args: cargo-machete@0.7.0\n\n - name: Check unused dependencies\n uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2\n with:\n command: machete\n\n - name: Check licenses\n run: |\n script/check-licenses\n if [[ "${{ needs.job_spec.outputs.run_license }}" == "true" ]]; then\n script/generate-licenses /tmp/zed_licenses_output\n fi\n\n - name: Check for new vulnerable dependencies\n if: github.event_name == 'pull_request'\n uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8 # v4\n with:\n license-check: false\n\n - name: Run tests\n uses: ./.github/actions/run_tests\n\n - name: Build collab\n run: cargo build -p collab\n\n - name: Build other binaries and features\n run: |\n cargo build --workspace --bins --all-features\n cargo check -p gpui --features "macos-blade"\n cargo check -p workspace\n cargo build -p remote_server\n cargo check -p gpui --examples\n\n # Since the macOS runners are stateful, we need to remove the config file to prevent potential bugs.\n - name: Clean CI config file\n if: always()\n run: rm -rf ./../.cargo\n\n linux_tests:\n timeout-minutes: 60\n name: (Linux) Run Clippy and tests\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n runs-on:\n - buildjet-16vcpu-ubuntu-2204\n steps:\n - name: Add Rust to the PATH\n run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Cache dependencies\n uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2\n with:\n save-if: ${{ github.ref == 'refs/heads/main' }}\n cache-provider: "buildjet"\n\n - name: Install Linux dependencies\n run: ./script/linux\n\n - name: Configure CI\n run: |\n mkdir -p ./../.cargo\n cp ./.cargo/ci-config.toml ./../.cargo/config.toml\n\n - name: cargo clippy\n run: ./script/clippy\n\n - name: Run tests\n uses: ./.github/actions/run_tests\n\n - name: Build other binaries and features\n run: |\n cargo build -p zed\n cargo check -p workspace\n cargo check -p gpui --examples\n\n # Even though the Linux runner is not stateful, in theory there is no need to do this cleanup.\n # But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code\n # to clean up the config file, I've included the cleanup code here as a precaution.\n # While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.\n - name: Clean CI config file\n if: always()\n run: rm -rf ./../.cargo\n\n build_remote_server:\n timeout-minutes: 60\n name: (Linux) Build Remote Server\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n runs-on:\n - buildjet-8vcpu-ubuntu-2204\n steps:\n - name: Add Rust to the PATH\n run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Cache dependencies\n uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2\n with:\n save-if: ${{ github.ref == 'refs/heads/main' }}\n cache-provider: "buildjet"\n\n - name: Install Clang & Mold\n run: ./script/remote-server && ./script/install-mold 2.34.0\n\n - name: Configure CI\n run: |\n mkdir -p ./../.cargo\n cp ./.cargo/ci-config.toml ./../.cargo/config.toml\n\n - name: Build Remote Server\n run: cargo build -p remote_server\n\n - name: Clean CI config file\n if: always()\n run: rm -rf ./../.cargo\n\n windows_clippy:\n timeout-minutes: 60\n name: (Windows) Run Clippy\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n runs-on: windows-2025-16\n steps:\n # more info here:- https://github.com/rust-lang/cargo/issues/13020\n - name: Enable longer pathnames for git\n run: git config --system core.longpaths true\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Create Dev Drive using ReFS\n run: ./script/setup-dev-driver.ps1\n\n # actions/checkout does not let us clone into anywhere outside ${{ github.workspace }}, so we have to copy the clone...\n - name: Copy Git Repo to Dev Drive\n run: |\n Copy-Item -Path "${{ github.workspace }}" -Destination "${{ env.ZED_WORKSPACE }}" -Recurse\n\n - name: Cache dependencies\n uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2\n with:\n save-if: ${{ github.ref == 'refs/heads/main' }}\n workspaces: ${{ env.ZED_WORKSPACE }}\n cache-provider: "github"\n\n - name: Configure CI\n run: |\n mkdir -p ${{ env.CARGO_HOME }} -ErrorAction Ignore\n cp ./.cargo/ci-config.toml ${{ env.CARGO_HOME }}/config.toml\n\n - name: cargo clippy\n working-directory: ${{ env.ZED_WORKSPACE }}\n run: ./script/clippy.ps1\n\n - name: Check dev drive space\n working-directory: ${{ env.ZED_WORKSPACE }}\n # `setup-dev-driver.ps1` creates a 100GB drive, with CI taking up ~45GB of the drive.\n run: ./script/exit-ci-if-dev-drive-is-full.ps1 95\n\n # Since the Windows runners are stateful, we need to remove the config file to prevent potential bugs.\n - name: Clean CI config file\n if: always()\n run: |\n if (Test-Path "${{ env.CARGO_HOME }}/config.toml") {\n Remove-Item -Path "${{ env.CARGO_HOME }}/config.toml" -Force\n }\n\n # Windows CI takes twice as long as our other platforms and fast github hosted runners are expensive.\n # But we still want to do CI, so let's only run tests on main and come back to this when we're\n # ready to self host our Windows CI (e.g. during the push for full Windows support)\n windows_tests:\n timeout-minutes: 60\n name: (Windows) Run Tests\n needs: [job_spec]\n if: |\n github.repository_owner == 'zed-industries' &&\n needs.job_spec.outputs.run_tests == 'true'\n # Use bigger runners for PRs (speed); smaller for async (cost)\n runs-on: ${{ github.event_name == 'pull_request' && 'windows-2025-32' || 'windows-2025-16' }}\n steps:\n # more info here:- https://github.com/rust-lang/cargo/issues/13020\n - name: Enable longer pathnames for git\n run: git config --system core.longpaths true\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Create Dev Drive using ReFS\n run: ./script/setup-dev-driver.ps1\n\n # actions/checkout does not let us clone into anywhere outside ${{ github.workspace }}, so we have to copy the clone...\n - name: Copy Git Repo to Dev Drive\n run: |\n Copy-Item -Path "${{ github.workspace }}" -Destination "${{ env.ZED_WORKSPACE }}" -Recurse\n\n - name: Cache dependencies\n uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2\n with:\n save-if: ${{ github.ref == 'refs/heads/main' }}\n workspaces: ${{ env.ZED_WORKSPACE }}\n cache-provider: "github"\n\n - name: Configure CI\n run: |\n mkdir -p ${{ env.CARGO_HOME }} -ErrorAction Ignore\n cp ./.cargo/ci-config.toml ${{ env.CARGO_HOME }}/config.toml\n\n - name: Run tests\n uses: ./.github/actions/run_tests_windows\n with:\n working-directory: ${{ env.ZED_WORKSPACE }}\n\n - name: Build Zed\n working-directory: ${{ env.ZED_WORKSPACE }}\n run: cargo build\n\n - name: Check dev drive space\n working-directory: ${{ env.ZED_WORKSPACE }}\n # `setup-dev-driver.ps1` creates a 100GB drive, with CI taking up ~45GB of the drive.\n run: ./script/exit-ci-if-dev-drive-is-full.ps1 95\n\n # Since the Windows runners are stateful, we need to remove the config file to prevent potential bugs.\n - name: Clean CI config file\n if: always()\n run: |\n if 
(Test-Path "${{ env.CARGO_HOME }}/config.toml") {\n Remove-Item -Path "${{ env.CARGO_HOME }}/config.toml" -Force\n }\n\n tests_pass:\n name: Tests Pass\n runs-on: ubuntu-latest\n needs:\n - job_spec\n - style\n - migration_checks\n # run_tests: If adding required tests, add them here and to script below.\n - workspace_hack\n - linux_tests\n - build_remote_server\n - macos_tests\n - windows_clippy\n - windows_tests\n if: always()\n steps:\n - name: Check all tests passed\n run: |\n # Check dependent jobs...\n RET_CODE=0\n # Always check style\n [[ "${{ needs.style.result }}" != 'success' ]] && { RET_CODE=1; echo "style tests failed"; }\n\n # Only check test jobs if they were supposed to run\n if [[ "${{ needs.job_spec.outputs.run_tests }}" == "true" ]]; then\n [[ "${{ needs.workspace_hack.result }}" != 'success' ]] && { RET_CODE=1; echo "Workspace Hack failed"; }\n [[ "${{ needs.macos_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "macOS tests failed"; }\n [[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }\n [[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }\n [[ "${{ needs.windows_clippy.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows clippy failed"; }\n [[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }\n # This check is intentionally disabled. 
See: https://github.com/zed-industries/zed/pull/28431\n # [[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration Checks failed"; }\n fi\n if [[ "$RET_CODE" -eq 0 ]]; then\n echo "All tests passed successfully!"\n fi\n exit $RET_CODE\n\n bundle-mac:\n timeout-minutes: 120\n name: Create a macOS bundle\n runs-on:\n - self-hosted\n - bundle\n if: |\n startsWith(github.ref, 'refs/tags/v')\n || contains(github.event.pull_request.labels.*.name, 'run-bundling')\n needs: [macos_tests]\n env:\n MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}\n MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}\n APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}\n APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}\n APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n steps:\n - name: Install Node\n uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n with:\n node-version: "18"\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n # We need to fetch more than one commit so that `script/draft-release-notes`\n # is able to diff between the current and previous tag.\n #\n # 25 was chosen arbitrarily.\n fetch-depth: 25\n clean: false\n ref: ${{ github.ref }}\n\n - name: Limit target directory size\n run: script/clear-target-dir-if-larger-than 100\n\n - name: Determine version and release channel\n if: ${{ startsWith(github.ref, 'refs/tags/v') }}\n run: |\n # This exports RELEASE_CHANNEL into env (GITHUB_ENV)\n script/determine-release-channel\n\n - name: Draft release notes\n if: ${{ 
startsWith(github.ref, 'refs/tags/v') }}\n run: |\n mkdir -p target/\n # Ignore any errors that occur while drafting release notes to not fail the build.\n script/draft-release-notes "$RELEASE_VERSION" "$RELEASE_CHANNEL" > target/release-notes.md || true\n script/create-draft-release target/release-notes.md\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Create macOS app bundle\n run: script/bundle-mac\n\n - name: Rename binaries\n if: ${{ github.ref == 'refs/heads/main' || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}\n run: |\n mv target/aarch64-apple-darwin/release/Zed.dmg target/aarch64-apple-darwin/release/Zed-aarch64.dmg\n mv target/x86_64-apple-darwin/release/Zed.dmg target/x86_64-apple-darwin/release/Zed-x86_64.dmg\n\n - name: Upload app bundle (aarch64) to workflow run if main branch or specific label\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: ${{ github.ref == 'refs/heads/main' || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}\n with:\n name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg\n path: target/aarch64-apple-darwin/release/Zed-aarch64.dmg\n\n - name: Upload app bundle (x86_64) to workflow run if main branch or specific label\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: ${{ github.ref == 'refs/heads/main' || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}\n with:\n name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg\n path: target/x86_64-apple-darwin/release/Zed-x86_64.dmg\n\n - uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1\n name: Upload app bundle to release\n if: ${{ env.RELEASE_CHANNEL == 'preview' || env.RELEASE_CHANNEL == 'stable' }}\n with:\n draft: true\n prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}\n files: |\n target/zed-remote-server-macos-x86_64.gz\n 
target/zed-remote-server-macos-aarch64.gz\n target/aarch64-apple-darwin/release/Zed-aarch64.dmg\n target/x86_64-apple-darwin/release/Zed-x86_64.dmg\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n bundle-linux-x86_x64:\n timeout-minutes: 60\n name: Linux x86_x64 release bundle\n runs-on:\n - buildjet-16vcpu-ubuntu-2004 # ubuntu 20.04 for minimal glibc\n if: |\n startsWith(github.ref, 'refs/tags/v')\n || contains(github.event.pull_request.labels.*.name, 'run-bundling')\n needs: [linux_tests]\n env:\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Install Linux dependencies\n run: ./script/linux && ./script/install-mold 2.34.0\n\n - name: Determine version and release channel\n if: startsWith(github.ref, 'refs/tags/v')\n run: |\n # This exports RELEASE_CHANNEL into env (GITHUB_ENV)\n script/determine-release-channel\n\n - name: Create Linux .tar.gz bundle\n run: script/bundle-linux\n\n - name: Upload Artifact to Workflow - zed (run-bundling)\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: contains(github.event.pull_request.labels.*.name, 'run-bundling')\n with:\n name: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz\n path: target/release/zed-*.tar.gz\n\n - name: Upload Artifact to Workflow - zed-remote-server (run-bundling)\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: contains(github.event.pull_request.labels.*.name, 'run-bundling')\n with:\n name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha 
}}-x86_64-unknown-linux-gnu.gz\n path: target/zed-remote-server-linux-x86_64.gz\n\n - name: Upload Artifacts to release\n uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1\n if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}\n with:\n draft: true\n prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}\n files: |\n target/zed-remote-server-linux-x86_64.gz\n target/release/zed-linux-x86_64.tar.gz\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n bundle-linux-aarch64: # this runs on ubuntu22.04\n timeout-minutes: 60\n name: Linux arm64 release bundle\n runs-on:\n - buildjet-16vcpu-ubuntu-2204-arm\n if: |\n startsWith(github.ref, 'refs/tags/v')\n || contains(github.event.pull_request.labels.*.name, 'run-bundling')\n needs: [linux_tests]\n env:\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Install Linux dependencies\n run: ./script/linux\n\n - name: Determine version and release channel\n if: startsWith(github.ref, 'refs/tags/v')\n run: |\n # This exports RELEASE_CHANNEL into env (GITHUB_ENV)\n script/determine-release-channel\n\n - name: Create and upload Linux .tar.gz bundles\n run: script/bundle-linux\n\n - name: Upload Artifact to Workflow - zed (run-bundling)\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: contains(github.event.pull_request.labels.*.name, 'run-bundling')\n with:\n name: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz\n path: target/release/zed-*.tar.gz\n\n - name: Upload Artifact to Workflow - 
zed-remote-server (run-bundling)\n uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: contains(github.event.pull_request.labels.*.name, 'run-bundling')\n with:\n name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.gz\n path: target/zed-remote-server-linux-aarch64.gz\n\n - name: Upload Artifacts to release\n uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1\n if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}\n with:\n draft: true\n prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}\n files: |\n target/zed-remote-server-linux-aarch64.gz\n target/release/zed-linux-aarch64.tar.gz\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n\n nix-build:\n timeout-minutes: 60\n name: Nix Build\n continue-on-error: true\n if: github.repository_owner == 'zed-industries' && contains(github.event.pull_request.labels.*.name, 'run-nix')\n strategy:\n fail-fast: false\n matrix:\n system:\n - os: x86 Linux\n runner: buildjet-16vcpu-ubuntu-2204\n install_nix: true\n - os: arm Mac\n runner: [macOS, ARM64, test]\n install_nix: false\n runs-on: ${{ matrix.system.runner }}\n env:\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n GIT_LFS_SKIP_SMUDGE: 1 # breaks the livekit rust sdk examples which we don't actually depend on\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n - name: Set path\n if: ${{ ! 
matrix.system.install_nix }}\n run: |\n echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH\n echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH\n\n - uses: cachix/install-nix-action@d1ca217b388ee87b2507a9a93bf01368bde7cec2 # v31\n if: ${{ matrix.system.install_nix }}\n with:\n github_access_token: ${{ secrets.GITHUB_TOKEN }}\n\n - uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16\n with:\n name: zed-industries\n authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"\n skipPush: true\n - run: nix build .#debug\n - name: Limit /nix/store to 50GB\n run: "[ $(du -sm /nix/store | cut -f1) -gt 50000 ] && nix-collect-garbage -d"\n\n auto-release-preview:\n name: Auto release preview\n if: |\n startsWith(github.ref, 'refs/tags/v')\n && endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')\n needs: [bundle-mac, bundle-linux-x86_x64, bundle-linux-aarch64]\n runs-on:\n - self-hosted\n - bundle\n steps:\n - name: gh release\n run: gh release edit $GITHUB_REF_NAME --draft=true\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\ci.yml | ci.yml | YAML | 29,189 | 0.95 | 0.087353 | 0.056464 | vue-tools | 18 | 2024-06-03T22:24:53.245100 | Apache-2.0 | false | 8944b2683ff287f6f6c74a911ddfb9df |
name: "Close Stale Issues"\non:\n schedule:\n - cron: "0 7,9,11 * * 3"\n workflow_dispatch:\n\njobs:\n stale:\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n steps:\n - uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n stale-issue-message: >\n Hi there! ๐\n\n We're working to clean up our issue tracker by closing older issues that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and we will keep it open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, we'll close it in 7 days.\n\n Thanks for your help!\n close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."\n days-before-stale: 120\n days-before-close: 7\n any-of-issue-labels: "bug,panic / crash"\n operations-per-run: 1000\n ascending: true\n enable-statistics: true\n stale-issue-label: "stale"\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\community_close_stale_issues.yml | community_close_stale_issues.yml | YAML | 1,211 | 0.8 | 0.071429 | 0 | python-kit | 255 | 2024-11-08T22:00:24.225678 | GPL-3.0 | false | cfd9184c1dcfa75f6e608d8614187a7d |
name: Delete Mediafire Comments\n\non:\n issue_comment:\n types: [created]\n\npermissions:\n issues: write\n\njobs:\n delete_comment:\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n steps:\n - name: Check for specific strings in comment\n id: check_comment\n uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n with:\n script: |\n const comment = context.payload.comment.body;\n const triggerStrings = ['www.mediafire.com'];\n return triggerStrings.some(triggerString => comment.includes(triggerString));\n\n - name: Delete comment if it contains any of the specific strings\n if: steps.check_comment.outputs.result == 'true'\n uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n with:\n script: |\n const commentId = context.payload.comment.id;\n await github.rest.issues.deleteComment({\n owner: context.repo.owner,\n repo: context.repo.repo,\n comment_id: commentId\n });\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\community_delete_comments.yml | community_delete_comments.yml | YAML | 1,109 | 0.8 | 0.117647 | 0 | node-utils | 729 | 2025-05-14T14:50:34.940919 | MIT | false | 426fe40c114c220ce0ba1a67266ee15b |
name: Release Actions\n\non:\n release:\n types: [published]\n\njobs:\n discord_release:\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n steps:\n - name: Get release URL\n id: get-release-url\n run: |\n if [ "${{ github.event.release.prerelease }}" == "true" ]; then\n URL="https://zed.dev/releases/preview/latest"\n else\n URL="https://zed.dev/releases/stable/latest"\n fi\n\n echo "URL=$URL" >> $GITHUB_OUTPUT\n - name: Get content\n uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757 # v1.4.1\n id: get-content\n with:\n stringToTruncate: |\n 📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!\n\n ${{ github.event.release.body }}\n maxLength: 2000\n truncationSymbol: "..."\n - name: Discord Webhook Action\n uses: tsickert/discord-webhook@c840d45a03a323fbc3f7507ac7769dbd91bfb164 # v5.3.0\n with:\n webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}\n content: ${{ steps.get-content.outputs.string }}\n\n send_release_notes_email:\n if: github.repository_owner == 'zed-industries' && !github.event.release.prerelease\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n fetch-depth: 0\n\n - name: Check if release was promoted from preview\n id: check-promotion-from-preview\n run: |\n VERSION="${{ github.event.release.tag_name }}"\n PREVIEW_TAG="${VERSION}-pre"\n\n if git rev-parse "$PREVIEW_TAG" > /dev/null 2>&1; then\n echo "was_promoted_from_preview=true" >> $GITHUB_OUTPUT\n else\n echo "was_promoted_from_preview=false" >> $GITHUB_OUTPUT\n fi\n\n - name: Send release notes email\n if: steps.check-promotion-from-preview.outputs.was_promoted_from_preview == 'true'\n run: |\n TAG="${{ github.event.release.tag_name }}"\n cat << 'EOF' > release_body.txt\n ${{ github.event.release.body }}\n EOF\n jq -n --arg tag "$TAG" --rawfile body release_body.txt '{version: $tag, markdown_body: $body}' \\n > 
release_data.json\n curl -X POST "https://zed.dev/api/send_release_notes_email" \\n -H "Authorization: Bearer ${{ secrets.RELEASE_NOTES_API_TOKEN }}" \\n -H "Content-Type: application/json" \\n -d @release_data.json\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\community_release_actions.yml | community_release_actions.yml | YAML | 2,583 | 0.8 | 0.085714 | 0 | react-lib | 28 | 2025-04-11T07:41:44.617721 | BSD-3-Clause | false | f0fb2ae8cb1641baee8370989074a641 |
name: Update All Top Ranking Issues\n\non:\n schedule:\n - cron: "0 */12 * * *"\n workflow_dispatch:\n\njobs:\n update_top_ranking_issues:\n runs-on: ubuntu-latest\n if: github.repository == 'zed-industries/zed'\n steps:\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n - name: Set up uv\n uses: astral-sh/setup-uv@caf0cab7a618c569241d31dcd442f54681755d39 # v3\n with:\n version: "latest"\n enable-cache: true\n cache-dependency-glob: "script/update_top_ranking_issues/pyproject.toml"\n - name: Install Python 3.13\n run: uv python install 3.13\n - name: Install dependencies\n run: uv sync --project script/update_top_ranking_issues -p 3.13\n - name: Run script\n run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 5393\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\community_update_all_top_ranking_issues.yml | community_update_all_top_ranking_issues.yml | YAML | 938 | 0.8 | 0.04 | 0 | react-lib | 727 | 2025-03-17T02:50:10.747990 | BSD-3-Clause | false | 5d59b1e4f5cdbbf6605368ba66c8f2af |
name: Update Weekly Top Ranking Issues\n\non:\n schedule:\n - cron: "0 15 * * *"\n workflow_dispatch:\n\njobs:\n update_top_ranking_issues:\n runs-on: ubuntu-latest\n if: github.repository == 'zed-industries/zed'\n steps:\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n - name: Set up uv\n uses: astral-sh/setup-uv@caf0cab7a618c569241d31dcd442f54681755d39 # v3\n with:\n version: "latest"\n enable-cache: true\n cache-dependency-glob: "script/update_top_ranking_issues/pyproject.toml"\n - name: Install Python 3.13\n run: uv python install 3.13\n - name: Install dependencies\n run: uv sync --project script/update_top_ranking_issues -p 3.13\n - name: Run script\n run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 6952 --query-day-interval 7\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\community_update_weekly_top_ranking_issues.yml | community_update_weekly_top_ranking_issues.yml | YAML | 962 | 0.8 | 0.04 | 0 | awesome-app | 337 | 2024-07-02T19:16:07.729902 | BSD-3-Clause | false | 5c63ebde7a35abedb78d8aad70944c47 |
name: Danger\n\non:\n pull_request:\n branches: [main]\n types:\n - opened\n - synchronize\n - reopened\n - edited\n\njobs:\n danger:\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n\n - uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0\n with:\n version: 9\n\n - name: Setup Node\n uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n with:\n node-version: "20"\n cache: "pnpm"\n cache-dependency-path: "script/danger/pnpm-lock.yaml"\n\n - run: pnpm install --dir script/danger\n\n - name: Run Danger\n run: pnpm run --dir script/danger danger ci\n env:\n # This GitHub token is not used, but the value needs to be here to prevent\n # Danger from throwing an error.\n GITHUB_TOKEN: "not_a_real_token"\n # All requests are instead proxied through an instance of\n # https://github.com/maxdeviant/danger-proxy that allows Danger to securely\n # authenticate with GitHub while still being able to run on PRs from forks.\n DANGER_GITHUB_API_BASE_URL: "https://danger-proxy.fly.dev/github"\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\danger.yml | danger.yml | YAML | 1,297 | 0.8 | 0.047619 | 0.142857 | python-kit | 160 | 2024-02-29T05:26:32.312393 | Apache-2.0 | false | 6bc525ff15632bba1d2c65bbd116c493 |
name: Deploy Docs\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy-docs:\n name: Deploy Docs\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Setup mdBook\n uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2\n with:\n mdbook-version: "0.4.37"\n\n - name: Set up default .cargo/config.toml\n run: cp ./.cargo/collab-config.toml ./.cargo/config.toml\n\n - name: Install system dependencies\n run: |\n sudo apt-get update\n sudo apt-get install libxkbcommon-dev libxkbcommon-x11-dev\n\n - name: Build book\n run: |\n set -euo pipefail\n mkdir -p target/deploy\n mdbook build ./docs --dest-dir=../target/deploy/docs/\n\n - name: Deploy Docs\n uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3\n with:\n apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}\n accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}\n command: pages deploy target/deploy --project-name=docs\n\n - name: Deploy Install\n uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3\n with:\n apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}\n accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}\n command: r2 object put -f script/install.sh zed-open-source-website-assets/install.sh\n\n - name: Deploy Docs Workers\n uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3\n with:\n apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}\n accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}\n command: deploy .cloudflare/docs-proxy/src/worker.js\n\n - name: Deploy Install Workers\n uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3\n with:\n apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}\n accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}\n command: deploy .cloudflare/docs-proxy/src/worker.js\n\n - name: Preserve Wrangler logs\n uses: 
actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n if: always()\n with:\n name: wrangler_logs\n path: /home/runner/.config/.wrangler/logs/\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\deploy_cloudflare.yml | deploy_cloudflare.yml | YAML | 2,452 | 0.8 | 0.027778 | 0 | react-lib | 299 | 2024-11-18T12:38:15.730861 | Apache-2.0 | false | 9b7c702246ac66ca4443a7a1f63e1612 |
name: Publish Collab Server Image\n\non:\n push:\n tags:\n - collab-production\n - collab-staging\n\nenv:\n DOCKER_BUILDKIT: 1\n\njobs:\n style:\n name: Check formatting and Clippy lints\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - self-hosted\n - test\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n fetch-depth: 0\n\n - name: Run style checks\n uses: ./.github/actions/check_style\n\n - name: Run clippy\n run: ./script/clippy\n\n tests:\n name: Run tests\n runs-on:\n - self-hosted\n - test\n needs: style\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n fetch-depth: 0\n\n - name: Install cargo nextest\n shell: bash -euxo pipefail {0}\n run: |\n cargo install cargo-nextest --locked\n\n - name: Limit target directory size\n shell: bash -euxo pipefail {0}\n run: script/clear-target-dir-if-larger-than 100\n\n - name: Run tests\n shell: bash -euxo pipefail {0}\n run: cargo nextest run --package collab --no-fail-fast\n\n publish:\n name: Publish collab server image\n needs:\n - style\n - tests\n runs-on:\n - buildjet-16vcpu-ubuntu-2204\n steps:\n - name: Install doctl\n uses: digitalocean/action-doctl@v2\n with:\n token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}\n\n - name: Sign into DigitalOcean docker registry\n run: doctl registry login\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Build docker image\n run: |\n docker build -f Dockerfile-collab \\n --build-arg GITHUB_SHA=$GITHUB_SHA \\n --tag registry.digitalocean.com/zed/collab:$GITHUB_SHA \\n .\n\n - name: Publish docker image\n run: docker push registry.digitalocean.com/zed/collab:${GITHUB_SHA}\n\n - name: Prune Docker system\n run: docker system prune --filter 'until=72h' -f\n\n deploy:\n name: Deploy new server image\n needs:\n - publish\n runs-on:\n - 
buildjet-16vcpu-ubuntu-2204\n\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Install doctl\n uses: digitalocean/action-doctl@v2\n with:\n token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}\n\n - name: Sign into Kubernetes\n run: doctl kubernetes cluster kubeconfig save --expiry-seconds 600 ${{ secrets.CLUSTER_NAME }}\n\n - name: Start rollout\n run: |\n set -eu\n if [[ $GITHUB_REF_NAME = "collab-production" ]]; then\n export ZED_KUBE_NAMESPACE=production\n export ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT=10\n export ZED_API_LOAD_BALANCER_SIZE_UNIT=2\n elif [[ $GITHUB_REF_NAME = "collab-staging" ]]; then\n export ZED_KUBE_NAMESPACE=staging\n export ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT=1\n export ZED_API_LOAD_BALANCER_SIZE_UNIT=1\n else\n echo "cowardly refusing to deploy from an unknown branch"\n exit 1\n fi\n\n echo "Deploying collab:$GITHUB_SHA to $ZED_KUBE_NAMESPACE"\n\n source script/lib/deploy-helpers.sh\n export_vars_for_environment $ZED_KUBE_NAMESPACE\n\n export ZED_DO_CERTIFICATE_ID=$(doctl compute certificate list --format ID --no-header)\n export ZED_IMAGE_ID="registry.digitalocean.com/zed/collab:${GITHUB_SHA}"\n\n export ZED_SERVICE_NAME=collab\n export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT\n envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -\n kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch\n echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"\n\n export ZED_SERVICE_NAME=api\n export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_API_LOAD_BALANCER_SIZE_UNIT\n envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -\n kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch\n echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\deploy_collab.yml | deploy_collab.yml | YAML | 4,495 | 0.8 | 
0.020408 | 0 | vue-tools | 844 | 2025-05-26T12:04:31.866442 | GPL-3.0 | false | 5b8d91b78373103bb7e7662e2317b6a9 |
name: Run Agent Eval\n\non:\n schedule:\n - cron: "0 * * * *"\n\n pull_request:\n branches:\n - "**"\n types: [opened, synchronize, reopened, labeled]\n\n workflow_dispatch:\n\nconcurrency:\n # Allow only one workflow per any non-`main` branch.\n group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}\n cancel-in-progress: true\n\nenv:\n CARGO_TERM_COLOR: always\n CARGO_INCREMENTAL: 0\n RUST_BACKTRACE: 1\n ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_EVAL_TELEMETRY: 1\n\njobs:\n run_eval:\n timeout-minutes: 60\n name: Run Agent Eval\n if: >\n github.repository_owner == 'zed-industries' &&\n (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))\n runs-on:\n - buildjet-16vcpu-ubuntu-2204\n steps:\n - name: Add Rust to the PATH\n run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH\n\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Cache dependencies\n uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2\n with:\n save-if: ${{ github.ref == 'refs/heads/main' }}\n cache-provider: "buildjet"\n\n - name: Install Linux dependencies\n run: ./script/linux\n\n - name: Configure CI\n run: |\n mkdir -p ./../.cargo\n cp ./.cargo/ci-config.toml ./../.cargo/config.toml\n\n - name: Compile eval\n run: cargo build --package=eval\n\n - name: Run eval\n run: cargo run --package=eval -- --repetitions=3 --concurrency=1\n\n # Since the Linux runner is not stateful, in theory there is no need to do this cleanup.\n # But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code\n # to clean up the config file, I've included the cleanup code here as a precaution.\n # While it's not strictly necessary at this moment, I believe it's better to err on the side of 
caution.\n - name: Clean CI config file\n if: always()\n run: rm -rf ./../.cargo\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\eval.yml | eval.yml | YAML | 2,246 | 0.8 | 0.056338 | 0.086207 | python-kit | 556 | 2023-12-25T13:19:27.249428 | MIT | false | 2dbfb7b1533ffa799ad03e1e1f4fb4b0 |
name: Issue Response\n\non:\n schedule:\n - cron: "0 12 * * 2"\n workflow_dispatch:\n\njobs:\n issue-response:\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n\n - uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0\n with:\n version: 9\n\n - name: Setup Node\n uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n with:\n node-version: "20"\n cache: "pnpm"\n cache-dependency-path: "script/issue_response/pnpm-lock.yaml"\n\n - run: pnpm install --dir script/issue_response\n\n - name: Run Issue Response\n run: pnpm run --dir script/issue_response start\n env:\n ISSUE_RESPONSE_GITHUB_TOKEN: ${{ secrets.ISSUE_RESPONSE_GITHUB_TOKEN }}\n SLACK_ISSUE_RESPONSE_WEBHOOK_URL: ${{ secrets.SLACK_ISSUE_RESPONSE_WEBHOOK_URL }}\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\issue_response.yml | issue_response.yml | YAML | 971 | 0.8 | 0.030303 | 0 | python-kit | 279 | 2024-03-31T04:23:03.697562 | Apache-2.0 | false | 463511d445371f57fa502c0fd757cb06 |
name: Publish zed-extension CLI\n\non:\n push:\n tags:\n - extension-cli\n\nenv:\n CARGO_TERM_COLOR: always\n CARGO_INCREMENTAL: 0\n\njobs:\n publish:\n name: Publish zed-extension CLI\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - ubuntu-latest\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Cache dependencies\n uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2\n with:\n save-if: ${{ github.ref == 'refs/heads/main' }}\n cache-provider: "github"\n\n - name: Configure linux\n shell: bash -euxo pipefail {0}\n run: script/linux\n\n - name: Build extension CLI\n run: cargo build --release --package extension_cli\n\n - name: Upload binary\n env:\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n run: script/upload-extension-cli ${{ github.sha }}\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\publish_extension_cli.yml | publish_extension_cli.yml | YAML | 1,115 | 0.8 | 0.04878 | 0 | python-kit | 504 | 2024-10-03T23:23:34.629415 | GPL-3.0 | false | eef55adaf20ec83235659a781c2c90e4 |
name: Release Nightly\n\non:\n schedule:\n # Fire every day at 7:00am UTC (Roughly before EU workday and after US workday)\n - cron: "0 7 * * *"\n push:\n tags:\n - "nightly"\n\nenv:\n CARGO_TERM_COLOR: always\n CARGO_INCREMENTAL: 0\n RUST_BACKTRACE: 1\n\njobs:\n style:\n timeout-minutes: 60\n name: Check formatting and Clippy lints\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - self-hosted\n - test\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n fetch-depth: 0\n\n - name: Run style checks\n uses: ./.github/actions/check_style\n\n - name: Run clippy\n run: ./script/clippy\n\n tests:\n timeout-minutes: 60\n name: Run tests\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - self-hosted\n - test\n needs: style\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Run tests\n uses: ./.github/actions/run_tests\n\n bundle-mac:\n timeout-minutes: 60\n name: Create a macOS bundle\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - self-hosted\n - bundle\n needs: tests\n env:\n MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}\n MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}\n APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}\n APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}\n APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n steps:\n - name: Install Node\n uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n with:\n node-version: "18"\n\n - 
name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Set release channel to nightly\n run: |\n set -eu\n version=$(git rev-parse --short HEAD)\n echo "Publishing version: ${version} on release channel nightly"\n echo "nightly" > crates/zed/RELEASE_CHANNEL\n\n - name: Create macOS app bundle\n run: script/bundle-mac\n\n - name: Upload Zed Nightly\n run: script/upload-nightly macos\n\n bundle-linux-x86:\n timeout-minutes: 60\n name: Create a Linux *.tar.gz bundle for x86\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - buildjet-16vcpu-ubuntu-2004\n needs: tests\n env:\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Add Rust to the PATH\n run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH\n\n - name: Install Linux dependencies\n run: ./script/linux && ./script/install-mold 2.34.0\n\n - name: Limit target directory size\n run: script/clear-target-dir-if-larger-than 100\n\n - name: Set release channel to nightly\n run: |\n set -euo pipefail\n version=$(git rev-parse --short HEAD)\n echo "Publishing version: ${version} on release channel nightly"\n echo "nightly" > crates/zed/RELEASE_CHANNEL\n\n - name: Create Linux .tar.gz bundle\n run: script/bundle-linux\n\n - name: Upload Zed Nightly\n run: script/upload-nightly linux-targz\n\n bundle-linux-arm:\n timeout-minutes: 60\n name: Create a Linux *.tar.gz bundle for ARM\n if: github.repository_owner == 'zed-industries'\n runs-on:\n - buildjet-16vcpu-ubuntu-2204-arm\n needs: tests\n env:\n DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ 
secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}\n DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n - name: Install Linux dependencies\n run: ./script/linux\n\n - name: Limit target directory size\n run: script/clear-target-dir-if-larger-than 100\n\n - name: Set release channel to nightly\n run: |\n set -euo pipefail\n version=$(git rev-parse --short HEAD)\n echo "Publishing version: ${version} on release channel nightly"\n echo "nightly" > crates/zed/RELEASE_CHANNEL\n\n - name: Create Linux .tar.gz bundle\n run: script/bundle-linux\n\n - name: Upload Zed Nightly\n run: script/upload-nightly linux-targz\n\n bundle-nix:\n timeout-minutes: 60\n name: (${{ matrix.system.os }}) Nix Build\n continue-on-error: true\n strategy:\n fail-fast: false\n matrix:\n system:\n - os: x86 Linux\n runner: buildjet-16vcpu-ubuntu-2204\n install_nix: true\n - os: arm Mac\n runner: [macOS, ARM64, test]\n install_nix: false\n if: github.repository_owner == 'zed-industries'\n runs-on: ${{ matrix.system.runner }}\n needs: tests\n env:\n ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}\n ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}\n GIT_LFS_SKIP_SMUDGE: 1 # breaks the livekit rust sdk examples which we don't actually depend on\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n clean: false\n\n # on our macs we manually install nix. 
for some reason the cachix action is running\n # under a non-login /bin/bash shell which doesn't source the proper script to add the\n # nix profile to PATH, so we manually add them here\n - name: Set path\n if: ${{ ! matrix.system.install_nix }}\n run: |\n echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH\n echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH\n\n - uses: cachix/install-nix-action@d1ca217b388ee87b2507a9a93bf01368bde7cec2 # v31\n if: ${{ matrix.system.install_nix }}\n with:\n github_access_token: ${{ secrets.GITHUB_TOKEN }}\n\n - uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16\n with:\n name: zed-industries\n authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"\n - run: nix build\n - name: Limit /nix/store to 50GB\n run: '[ $(du -sm /nix/store | cut -f1) -gt 50000 ] && nix-collect-garbage -d'\n\n update-nightly-tag:\n name: Update nightly tag\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n needs:\n - bundle-mac\n - bundle-linux-x86\n - bundle-linux-arm\n steps:\n - name: Checkout repo\n uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n with:\n fetch-depth: 0\n\n - name: Update nightly tag\n run: |\n if [ "$(git rev-parse nightly)" = "$(git rev-parse HEAD)" ]; then\n echo "Nightly tag already points to current commit. Skipping tagging."\n exit 0\n fi\n git config user.name github-actions\n git config user.email github-actions@github.com\n git tag -f nightly\n git push origin nightly --force\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\release_nightly.yml | release_nightly.yml | YAML | 8,206 | 0.8 | 0.061224 | 0.018692 | awesome-app | 682 | 2023-11-13T12:26:26.178551 | MIT | false | 4fed49b5a65e224530f6e0cee0680bbe |
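The `Limit target directory size` steps above rely on `script/clear-target-dir-if-larger-than 100`, i.e. wipe the Cargo target directory once it grows past a threshold. A minimal Python sketch of that guard (the function names and the GB unit are my assumptions; the real helper is a shell script):

```python
import os

def dir_size_bytes(path: str) -> int:
    """Total size of all regular files under `path` (symlinks skipped)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

def should_clear(path: str, limit_gb: int = 100) -> bool:
    """True when the directory exceeds the size limit (the workflow
    passes 100, i.e. 100 GB)."""
    return dir_size_bytes(path) > limit_gb * 1024**3
```

In the workflow the actual clearing is a `rm -rf` of the target directory, run only when the check trips.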
name: Script\n\non:\n pull_request:\n paths:\n - "script/**"\n push:\n branches:\n - main\n\njobs:\n shellcheck:\n name: "ShellCheck Scripts"\n if: github.repository_owner == 'zed-industries'\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4\n - name: Shellcheck ./scripts\n run: |\n ./script/shellcheck-scripts error\n | dataset_sample\yaml\zed-industries_zed\.github\workflows\script_checks.yml | script_checks.yml | YAML | 414 | 0.8 | 0.047619 | 0 | python-kit | 727 | 2024-10-27T04:47:12.911547 | Apache-2.0 | false | 4dcdda46b306ff8e1df382a1de07c7e0 |
---\nversion: "2.1"\nservices:\n zellij-e2e:\n image: ghcr.io/linuxserver/openssh-server\n container_name: zellij-e2e\n hostname: zellij-e2e\n environment:\n PUID: 1000\n PGID: 1000\n TZ: Europe/Vienna\n PASSWORD_ACCESS: true\n USER_PASSWORD: test\n USER_NAME: test\n volumes:\n - type: bind\n source: ./target\n target: /usr/src/zellij\n - type: bind\n source: ./src/tests/fixtures\n target: /usr/src/zellij/fixtures\n ports:\n - 2222:2222\n restart: unless-stopped\n | dataset_sample\yaml\zellij-org_zellij\docker-compose.yml | docker-compose.yml | YAML | 539 | 0.7 | 0 | 0 | react-lib | 685 | 2025-02-28T23:39:50.696163 | Apache-2.0 | false | e8f0e92076e1e09d25d5cb02318ddf29 |
version: 2\nupdates:\n - package-ecosystem: "github-actions"\n directory: "/"\n schedule:\n interval: "monthly"\n\n # We want to be more sure in our e2e-test, before committing to\n # update all of our dependencies.\n # Only add packages very sporadically here for now.\n - package-ecosystem: "cargo"\n directory: "/"\n schedule:\n interval: "monthly"\n allow:\n # Allow only direct updates\n - dependency-name: "log"\n - package-ecosystem: "cargo"\n directory: "/zellij-utils/"\n schedule:\n interval: "monthly"\n allow:\n # Allow only direct updates\n - dependency-name: "log"\n - dependency-name: "log4rs"\n - dependency-name: "clap"\n - dependency-name: "clap_complete"\n\n\n | dataset_sample\yaml\zellij-org_zellij\.github\dependabot.yml | dependabot.yml | YAML | 730 | 0.8 | 0.034483 | 0.192308 | react-lib | 181 | 2025-02-20T22:51:42.411382 | MIT | false | 355a54d5f2fcbd9e882c8939fbaf38cc |
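The Cargo allow-lists above act as a per-directory lookup: Dependabot may only open update PRs for the named direct dependencies. A sketch of that rule (the dict mirrors the config; `update_allowed` is an illustrative name, not Dependabot API):

```python
# Direct-dependency allow-list per Cargo manifest directory,
# mirroring the dependabot.yml above.
CARGO_ALLOW = {
    "/": {"log"},
    "/zellij-utils/": {"log", "log4rs", "clap", "clap_complete"},
}

def update_allowed(directory: str, dependency: str) -> bool:
    """Would Dependabot be allowed to open an update PR for this dep?"""
    return dependency in CARGO_ALLOW.get(directory, set())
```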
# These are supported funding model platforms\n\ngithub: [imsnif]\npatreon: # Replace with a single Patreon username\nopen_collective: # Replace with a single Open Collective username\nko_fi: imsnif\ntidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel\ncommunity_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry\nliberapay: imsnif\nissuehunt: # Replace with a single IssueHunt username\notechie: # Replace with a single Otechie username\nlfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry\ncustom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']\n | dataset_sample\yaml\zellij-org_zellij\.github\FUNDING.yml | FUNDING.yml | YAML | 666 | 0.8 | 0 | 0.083333 | vue-tools | 153 | 2024-07-20T18:28:38.367445 | BSD-3-Clause | false | 4a49f13e73bfb66c38cf98d7483a4a73 |
name: End to End tests\n\non:\n push:\n branches: [ main ]\n pull_request:\n branches: [ main ]\n\nenv:\n CARGO_TERM_COLOR: always\n\njobs:\n test-e2e:\n name: Build generic binary and run tests on it\n runs-on: ubuntu-latest\n environment: cachix\n\n services:\n ssh:\n image: ghcr.io/linuxserver/openssh-server\n env:\n PUID: 1001\n PGID: 1000\n TZ: Europe/Vienna\n PASSWORD_ACCESS: true\n USER_PASSWORD: test\n USER_NAME: test\n ports:\n - 2222:2222\n options: -v ${{ github.workspace }}/target:/usr/src/zellij --name ssh\n steps:\n - uses: actions/checkout@v3\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v2\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Install musl-tools\n run: sudo apt-get install -y --no-install-recommends musl-tools\n\n - name: Install Rust\n uses: actions-rust-lang/setup-rust-toolchain@v1\n with:\n rustflags: ""\n\n - name: Create target folder\n run: mkdir -p ${{ github.workspace }}/target\n # we copy this manually into the target folder instead of mounting it because\n # github actions creates the service first, and if it has a mount that is part\n # of your yet unchecked out code, you cannot checkout the code after the mount\n\n - name: Copy fixtures folder to target\n run: cp -r ${{ github.workspace }}/src/tests/fixtures ${{ github.workspace }}/target\n\n - name: Restart ssh container\n # we need to do this because otherwise the volume will not be mounted\n # on the docker container, since it was created before the folder existed\n uses: docker://docker\n with:\n args: docker restart ssh\n - name: Test\n run: cargo xtask ci e2e --test\n | dataset_sample\yaml\zellij-org_zellij\.github\workflows\e2e.yml | e2e.yml | YAML | 1,789 | 0.8 | 0.015873 | 0.09434 | node-utils | 486 | 2025-02-24T22:14:21.405063 | BSD-3-Clause | false | dcdd723655710acb85783539c4f61c66 |
name: Release\non:\n push:\n tags:\n - 'v*.*.*'\n workflow_dispatch:\n\njobs:\n build-release:\n needs: create-release\n name: build-release\n runs-on: ${{ matrix.os }}\n env:\n RUST_BACKTRACE: 1\n strategy:\n matrix:\n build:\n - linux musl x64\n - linux musl aarch64\n - macos x64\n - macos aarch64\n include:\n - build: linux musl x64\n os: ubuntu-latest\n rust: stable\n target: x86_64-unknown-linux-musl\n - build: linux musl aarch64\n os: ubuntu-latest\n rust: stable\n target: aarch64-unknown-linux-musl\n - build: macos x64\n os: macos-latest\n rust: stable\n target: x86_64-apple-darwin\n - build: macos aarch64\n os: macos-latest\n rust: stable\n target: aarch64-apple-darwin\n steps:\n - name: Set release tag\n run: |\n if [ "$GITHUB_EVENT_NAME" == 'workflow_dispatch' ]; then\n echo "RELEASE_TAG=main" >> "$GITHUB_ENV"\n else\n echo "RELEASE_TAG=${GITHUB_REF#refs/tags/}" >> "$GITHUB_ENV"\n fi\n\n - name: Checkout repository\n uses: actions/checkout@v3\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v2\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Install musl-tools\n if: matrix.os == 'ubuntu-latest'\n run: sudo apt-get install -y --no-install-recommends musl-tools\n\n - name: Install Rust\n uses: actions-rust-lang/setup-rust-toolchain@v1\n with:\n toolchain: ${{ matrix.rust }}\n target: "${{ matrix.target }},wasm32-wasip1"\n # Just to make sure the cache doesn't interfere with the build here\n cache: false\n rustflags: ""\n\n - name: Build release binary\n run: cargo xtask ci cross ${{ matrix.target }}\n\n # this breaks on aarch64 and this if conditional isn't working for some reason: TODO: investigate\n #- name: Strip release binary\n # if: runner.target != 'aarch64-unknown-linux-musl' && runner.target != 'aarch64-apple-darwin'\n # run: strip "target/${{ matrix.target }}/release/zellij"\n\n - name: Create checksum\n id: make-checksum\n working-directory: ./target/${{ matrix.target }}/release\n run: |\n name="zellij-${{ matrix.target 
}}.sha256sum"\n if [[ "$RUNNER_OS" != "macOS" ]]; then\n sha256sum "zellij" > "${name}"\n else\n shasum -a 256 "zellij" > "${name}"\n fi\n echo "::set-output name=name::${name}"\n\n - name: Tar release\n id: make-artifact\n working-directory: ./target/${{ matrix.target }}/release\n run: |\n name="zellij-${{ matrix.target }}.tar.gz"\n tar cvzf "${name}" "zellij"\n echo "::set-output name=name::${name}"\n\n - name: Upload release archive\n uses: actions/upload-release-asset@v1.0.2\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n with:\n upload_url: ${{ needs.create-release.outputs.upload_url }}\n asset_path: ./target/${{ matrix.target }}/release/${{ steps.make-artifact.outputs.name }}\n asset_name: zellij-${{matrix.target}}.tar.gz\n asset_content_type: application/octet-stream\n\n - name: Upload checksum\n uses: actions/upload-release-asset@v1.0.2\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n with:\n upload_url: ${{ needs.create-release.outputs.upload_url }}\n asset_path: ./target/${{ matrix.target }}/release/${{ steps.make-checksum.outputs.name }}\n asset_name: zellij-${{matrix.target}}.sha256sum\n asset_content_type: text/plain\n\n create-release:\n runs-on: ubuntu-latest\n outputs:\n upload_url: ${{ steps.create_release.outputs.upload_url }}\n steps:\n - name: create_release\n id: create_release\n uses: actions/create-release@v1\n env:\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n with:\n tag_name: ${{ github.event_name == 'workflow_dispatch' && '' || github.ref }}\n release_name: Release ${{ github.event_name == 'workflow_dispatch' && 'main' || github.ref }}\n draft: ${{ github.event_name == 'workflow_dispatch' }}\n prerelease: false\n\n | dataset_sample\yaml\zellij-org_zellij\.github\workflows\release.yml | release.yml | YAML | 4,396 | 0.8 | 0.045455 | 0.042017 | vue-tools | 196 | 2025-01-30T01:02:50.747302 | MIT | false | 677e3d5a10cce54c0c163beff7bb4755 |
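The checksum step branches on the runner OS only because the tool differs (`sha256sum` on Linux, `shasum -a 256` on macOS); both emit the same `<hex digest>  <file name>` line. A Python equivalent for that format, over the binary's bytes (the helper name is mine):

```python
import hashlib

def checksum_line(data: bytes, name: str) -> str:
    """Render a sha256sum-compatible line: '<hex digest>  <file name>'
    (two spaces between digest and name, as both tools print)."""
    return f"{hashlib.sha256(data).hexdigest()}  {name}"
```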
name: Rust\n\non:\n push:\n branches: [main]\n pull_request:\n branches: [main]\n\nenv:\n CARGO_TERM_COLOR: always\n\njobs:\n build:\n name: Build\n\n strategy:\n fail-fast: false\n matrix:\n os: [ubuntu-latest, macos-latest]\n\n runs-on: ${{ matrix.os }}\n\n steps:\n - uses: actions/checkout@v3\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v2\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Setup toolchain\n uses: actions-rust-lang/setup-rust-toolchain@v1\n with:\n rustflags: ""\n\n - name: Build\n run: cargo xtask build\n\n test:\n name: Test\n\n strategy:\n fail-fast: false\n matrix:\n os: [ubuntu-latest, macos-latest]\n\n runs-on: ${{ matrix.os }}\n\n steps:\n - uses: actions/checkout@v3\n\n - name: Install Protoc\n uses: arduino/setup-protoc@v2\n with:\n repo-token: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Setup toolchain\n uses: actions-rust-lang/setup-rust-toolchain@v1\n with:\n rustflags: ""\n\n - name: Test\n run: cargo xtask test\n\n format:\n name: Check Formatting\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@v3\n\n - name: Setup toolchain\n uses: actions-rust-lang/setup-rust-toolchain@v1\n with:\n rustflags: ""\n\n - name: Check Format\n run: cargo xtask format --check\n | dataset_sample\yaml\zellij-org_zellij\.github\workflows\rust.yml | rust.yml | YAML | 1,433 | 0.7 | 0 | 0 | react-lib | 582 | 2023-09-03T15:57:23.723351 | MIT | false | 2f8a205202ac94f89df246d4d798fc01 |
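The Build and Test jobs above each fan out over `matrix.os`, and `fail-fast: false` means one failing combination does not cancel the others. The set of scheduled combinations is just a cartesian product; a sketch (job and OS lists copied from the workflow, the function is illustrative):

```python
from itertools import product

# Axes taken from the workflow's strategy.matrix
JOBS = ["build", "test"]
OSES = ["ubuntu-latest", "macos-latest"]

def expand_matrix():
    """All (job, os) combinations GitHub Actions would schedule.
    With fail-fast disabled, each combination runs independently."""
    return list(product(JOBS, OSES))
```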
task:\n name: freebsd-build\n freebsd_instance:\n matrix:\n - image_family: freebsd-15-0\n - image_family: freebsd-14-2\n - image_family: freebsd-13-5\n\n prepare_script:\n - pkg install -yq git cmake pkgconf jpeg-turbo mysql80-client ffmpeg libvncserver libjwt catch2 p5-DBI p5-DBD-mysql p5-Date-Manip p5-Test-LWP-UserAgent p5-Sys-Mmap v4l_compat\n\n configure_script:\n - git submodule update --init --recursive\n - mkdir build\n - cd build\n - cmake --version\n - cmake ../ -DBUILD_MAN=0 -DBUILD_TEST_SUITE=1 -DENABLE_WERROR=1 -DCMAKE_C_FLAGS="-Wno-deprecated-declarations" -DCMAKE_CXX_FLAGS="-Wno-deprecated-declarations"\n\n build_script:\n - cd build\n - make -j3\n\n install_script:\n - cd build\n - make install\n\n test_script:\n - cd build/tests\n - ./tests "~[notCI]"\n | dataset_sample\yaml\ZoneMinder_zoneminder\.cirrus.yml | .cirrus.yml | YAML | 812 | 0.7 | 0 | 0 | vue-tools | 823 | 2024-04-11T09:27:53.680138 | GPL-3.0 | false | d964895969ad34ba400a94ff0e7454e1 |
default:\n image:\n name: ubuntu:latest\n before_script:\n - apt-get update -yq\n - DEBIAN_FRONTEND=noninteractive apt-get install -yq devscripts sudo\n\nubuntu_deb:\n stage: build\n tags:\n - docker\n script:\n - DEBIAN_FRONTEND=noninteractive TZ=America/Chicago apt-get -y install tzdata\n - yes "" | ./utils/do_debian_package.sh --snapshot=stable --type=binary --interactive=no --dput=no --debbuild-extra=--no-sign || true\n timeout: 3h\n artifacts:\n paths:\n - '*.deb'\n expire_in: 1 week\n\ndebian_deb:\n stage: build\n tags:\n - docker\n image:\n name: debian:latest\n script:\n - yes "" | ./utils/do_debian_package.sh --snapshot=stable --type=binary --interactive=no --dput=no --debbuild-extra=--no-sign || true\n timeout: 3h\n artifacts:\n paths:\n - '*.deb'\n expire_in: 1 week\n | dataset_sample\yaml\ZoneMinder_zoneminder\.gitlab-ci.yml | .gitlab-ci.yml | YAML | 818 | 0.7 | 0 | 0 | node-utils | 321 | 2024-11-22T14:11:38.768338 | MIT | false | 934dfa2424e38cd65dcc0b53e2b66af3 |
"""\nDIRECT conversion from the native HuggingFace_Sample folder\nto a safe, optimized HuggingFace dataset\n"""\n\nimport os\nimport pandas as pd\nimport json\nfrom pathlib import Path\nimport hashlib\nfrom datetime import datetime\nimport re\nfrom collections import defaultdict\nimport random\n\n# Extension -> language mapping\nLANGUAGE_MAPPING = {\n    '.py': 'Python',\n    '.js': 'JavaScript',\n    '.php': 'PHP',\n    '.rb': 'Ruby',\n    '.swift': 'Swift',\n    '.cpp': 'C++',\n    '.h': 'C',\n    '.sh': 'Shell',\n    '.yml': 'YAML',\n    '.yaml': 'YAML',\n    '.md': 'Markdown',\n    '.json': 'JSON',\n    '.html': 'HTML',\n    '.xml': 'XML',\n    '.java': 'Java',\n    '.c': 'C'\n}\n\ndef get_file_language(file_path):\n    """Determine the language from the file extension"""\n    ext = Path(file_path).suffix.lower()\n    return LANGUAGE_MAPPING.get(ext, 'Other')\n\ndef calculate_quality_score(content, file_path):\n    """Compute a quality score for the file"""\n    try:\n        score = 0.5  # Base score\n\n        # Appropriate length\n        if 100 <= len(content) <= 50000:\n            score += 0.2\n\n        # Comments present\n        if '//' in content or '#' in content or '/*' in content:\n            score += 0.1\n\n        # Code structure\n        if any(keyword in content for keyword in ['function', 'def ', 'class ', 'import', 'require']):\n            score += 0.15\n\n        # Not too many special characters\n        special_ratio = len(re.findall(r'[^\w\s]', content)) / max(len(content), 1)\n        if special_ratio < 0.3:\n            score += 0.05\n\n        return min(1.0, score)\n    except:\n        return 0.3\n\ndef calculate_complexity(content):\n    """Compute code complexity"""\n    try:\n        # Count control structures\n        control_structures = len(re.findall(r'\b(if|for|while|switch|try|catch)\b', content))\n        # Count functions\n        functions = len(re.findall(r'\b(def|function|class)\b', content))\n        # Approximate nesting level\n        max_nesting = max([line.count('\t') + line.count(' ')//4 for line in content.split('\n')] + [0])\n\n        # Complexity formula\n        complexity = (control_structures * 0.3 + functions * 0.5 + max_nesting * 0.2) / max(len(content.split('\n')), 1)\n        return min(1.0, complexity)\n    except:\n        return 0.1\n\ndef calculate_documentation_ratio(content):\n    """Compute the documentation ratio"""\n    try:\n        lines = content.split('\n')\n        comment_lines = sum(1 for line in lines if line.strip().startswith(('#', '//', '/*', '*', '"""', "'''", '<!--')))\n        total_lines = len([line for line in lines if line.strip()])\n        return comment_lines / max(total_lines, 1)\n    except:\n        return 0.0\n\ndef generate_realistic_metadata():\n    """Generate realistic metadata for the files"""\n    # Realistic repository names\n    repo_prefixes = ['awesome', 'react', 'vue', 'angular', 'node', 'python', 'django', 'flask', 'spring', 'boot']\n    repo_suffixes = ['app', 'lib', 'core', 'utils', 'tools', 'framework', 'kit', 'ui', 'api', 'service']\n\n    repo_name = f"{random.choice(repo_prefixes)}-{random.choice(repo_suffixes)}"\n\n    # Realistic dates (last 3 years)\n    import datetime\n    import random\n\n    start_date = datetime.datetime(2022, 1, 1)\n    end_date = datetime.datetime.now()\n    random_date = start_date + datetime.timedelta(\n        seconds=random.randint(0, int((end_date - start_date).total_seconds()))\n    )\n\n    return {\n        'repository': repo_name,\n        'stars': random.randint(0, 1000),\n        'created_date': random_date.isoformat(),\n        'last_modified': random_date.isoformat(),\n        'license': random.choice(['MIT', 'Apache-2.0', 'GPL-3.0', 'BSD-3-Clause', 'ISC'])\n    }\n\ndef process_native_folder(input_folder, output_folder, files_per_chunk=40000):\n    """\n    Process the native folder and create a safe dataset for HuggingFace\n\n    Args:\n        input_folder: Path to the HuggingFace_Sample folder\n        output_folder: Output path for the safe files\n        files_per_chunk: Number of files per Parquet chunk\n    """\n\n    print("CONVERSION FROM THE NATIVE FOLDER")\n    print("=" * 50)\n    print(f"Input: {input_folder}")\n    print(f"Output: {output_folder}")\n\n    # Create the output folder\n    os.makedirs(output_folder, exist_ok=True)\n\n    # Scan all files\n    print("\nScanning the native folder...")\n    all_files = []\n    language_stats = defaultdict(int)\n\n    for root, dirs, files in os.walk(input_folder):\n        for file in files:\n            file_path = os.path.join(root, file)\n            rel_path = os.path.relpath(file_path, input_folder)\n\n            # Determine the language\n            language = get_file_language(file_path)\n            language_stats[language] += 1\n\n            all_files.append({\n                'full_path': file_path,\n                'relative_path': rel_path,\n                'filename': file,\n                'language': language\n            })\n\n    print(f"OK: found {len(all_files)} files in total")\n    print("\nDISTRIBUTION BY LANGUAGE:")\n    for lang, count in sorted(language_stats.items(), key=lambda x: x[1], reverse=True):\n        print(f"  {lang}: {count:,} files")\n\n    # Compute how many chunks are needed\n    num_chunks = (len(all_files) + files_per_chunk - 1) // files_per_chunk\n    print(f"\nCreating {num_chunks} chunks with at most {files_per_chunk:,} files each")\n\n    # Process the files in chunks\n    all_data = []\n    processed_files = 0\n\n    for chunk_idx in range(num_chunks):\n        start_idx = chunk_idx * files_per_chunk\n        end_idx = min((chunk_idx + 1) * files_per_chunk, len(all_files))\n        chunk_files = all_files[start_idx:end_idx]\n\n        print(f"\nProcessing chunk {chunk_idx + 1}/{num_chunks} ({len(chunk_files)} files)")\n\n        chunk_data = []\n\n        for i, file_info in enumerate(chunk_files):\n            if i % 1000 == 0:\n                print(f"  Processed {i}/{len(chunk_files)} files in this chunk...")\n\n            try:\n                # Read the file content\n                with open(file_info['full_path'], 'r', encoding='utf-8', errors='ignore') as f:\n                    content = f.read()\n\n                # Skip empty or oversized files\n                if not content.strip() or len(content) > 100000:\n                    continue\n\n                # Generate metadata\n                file_stats = os.stat(file_info['full_path'])\n                metadata = generate_realistic_metadata()\n\n                # Build the dataset record\n                record = {\n                    'content': content,\n                    'path': file_info['relative_path'],\n                    'filename': file_info['filename'],\n                    'language': file_info['language'],\n                    'size_bytes': file_stats.st_size,\n                    'quality_score': calculate_quality_score(content, file_info['full_path']),\n                    'complexity': calculate_complexity(content),\n                    'documentation_ratio': calculate_documentation_ratio(content),\n                    'repository': metadata['repository'],\n                    'stars': metadata['stars'],\n                    'created_date': metadata['created_date'],\n                    'last_modified': metadata['last_modified'],\n                    'license': metadata['license'],\n                    'is_test': 'test' in file_info['relative_path'].lower(),\n                    'file_hash': hashlib.md5(content.encode()).hexdigest()\n                }\n\n                chunk_data.append(record)\n                processed_files += 1\n\n            except Exception as e:\n                print(f"  WARNING: error processing {file_info['relative_path']}: {e}")\n                continue\n\n        # Save the chunk as Parquet\n        if chunk_data:\n            df = pd.DataFrame(chunk_data)\n            chunk_filename = f"data_{chunk_idx+1:04d}_of_{num_chunks:04d}.parquet"\n            chunk_path = os.path.join(output_folder, chunk_filename)\n\n            df.to_parquet(chunk_path,\n                          engine='pyarrow',\n                          compression='snappy',\n                          index=False)\n\n            file_size_mb = os.path.getsize(chunk_path) / (1024*1024)\n            print(f"  OK: chunk saved: {chunk_filename} ({file_size_mb:.1f}MB, {len(chunk_data)} records)")\n\n            # Add to the overall list for statistics\n            all_data.extend(chunk_data)\n\n    # Final statistics\n    print(f"\nFINAL STATISTICS:")\n    print(f"  Files processed: {processed_files:,}")\n    print(f"  Valid records: {len(all_data):,}")\n    print(f"  Chunks created: {num_chunks}")\n\n    # Per-language statistics\n    final_lang_stats = defaultdict(int)\n    total_size = 0\n    avg_quality = 0\n\n    for record in all_data:\n        final_lang_stats[record['language']] += 1\n        total_size += record['size_bytes']\n        avg_quality += record['quality_score']\n\n    avg_quality /= max(len(all_data), 1)\n\n    print(f"  Total size: {total_size / (1024*1024):.1f}MB")\n    print(f"  Average quality: {avg_quality:.3f}")\n\n    # Save the dataset metadata\n    dataset_info = {\n        "name": "The_Stack_Processed-v2",\n        "version": "2.0.0",\n        "description": "Curated and processed code dataset from The Stack",\n        "total_files": len(all_data),\n        "total_size_bytes": total_size,\n        "num_chunks": num_chunks,\n        "avg_quality_score": avg_quality,\n        "languages": dict(final_lang_stats),\n        "created_date": datetime.now().isoformat(),\n        "format": "parquet",\n        "compression": "snappy",\n        "license": "Apache-2.0"\n    }\n\n    with open(os.path.join(output_folder, "dataset_info.json"), 'w', encoding='utf-8') as f:\n        json.dump(dataset_info, f, indent=2, ensure_ascii=False)\n\n    # Create the loading script\n    create_loading_script(output_folder)\n\n    print(f"\nCONVERSION COMPLETE!")\n    print(f"Safe dataset saved to: {output_folder}")\n    print(f"Metadata: dataset_info.json")\n    print(f"Loading script: load_dataset.py")\n\n    return dataset_info\n\ndef create_loading_script(output_folder):\n    """Create a script to load the dataset"""\n\n    script_content = '''"""\nScript to load The_Stack_Processed-v2 from Parquet files\n"""\n\nimport pandas as pd\nimport os\nimport json\nfrom datasets import Dataset\nfrom pathlib import Path\n\ndef load_dataset_info(data_path="."):\n    """Load the dataset metadata"""\n    info_file = os.path.join(data_path, "dataset_info.json")\n    if os.path.exists(info_file):\n        with open(info_file, 'r', encoding='utf-8') as f:\n            return json.load(f)\n    return None\n\ndef load_complete_dataset(data_path=".", language_filter=None):\n    """\n    Load the complete dataset\n\n    Args:\n        data_path: Path to the Parquet files\n        language_filter: List of languages to include (optional)\n\n    Returns:\n        HuggingFace Dataset\n    """\n\n    print("Loading The_Stack_Processed-v2...")\n\n    # Find the Parquet files\n    parquet_files = sorted([f for f in os.listdir(data_path) if f.endswith('.parquet')])\n    print(f"Found {len(parquet_files)} Parquet files")\n\n    # Load the data\n    dfs = []\n    total_rows = 0\n\n    for i, file in enumerate(parquet_files, 1):\n        print(f"Loading {i}/{len(parquet_files)}: {file}")\n        df = pd.read_parquet(os.path.join(data_path, file))\n\n        # Filter by language if requested\n        if language_filter:\n            df = df[df['language'].isin(language_filter)]\n\n        dfs.append(df)\n        total_rows += len(df)\n        print(f"  {len(df):,} rows loaded")\n\n    # Combine everything\n    print("Combining the data...")\n    combined_df = pd.concat(dfs, ignore_index=True)\n\n    # Convert to an HF Dataset\n    dataset = Dataset.from_pandas(combined_df)\n\n    print(f"\\nDataset loaded!")\n    print(f"{len(dataset):,} code files")\n    print(f"Columns: {list(dataset.column_names)}")\n\n    if language_filter:\n        print(f"Filtered to languages: {language_filter}")\n\n    return dataset\n\ndef get_language_stats(data_path="."):\n    """Show per-language statistics"""\n    info = load_dataset_info(data_path)\n    if info and 'languages' in info:\n        print("STATISTICS BY LANGUAGE:")\n        for lang, count in sorted(info['languages'].items(), key=lambda x: x[1], reverse=True):\n            print(f"  {lang}: {count:,} files")\n\ndef load_sample(data_path=".", n_samples=10, language=None):\n    """Load a sample of the dataset"""\n    # Load only the first file, for speed\n    parquet_files = [f for f in os.listdir(data_path) if f.endswith('.parquet')]\n    if not parquet_files:\n        print("ERROR: no Parquet files found")\n        return None\n\n    df = pd.read_parquet(os.path.join(data_path, parquet_files[0]))\n\n    if language:\n        df = df[df['language'] == language]\n\n    return Dataset.from_pandas(df.head(n_samples))\n\n# Usage example\nif __name__ == "__main__":\n    # Show dataset info\n    info = load_dataset_info()\n    if info:\n        print("DATASET INFO:")\n        print(f"  Name: {info['name']}")\n        print(f"  Version: {info['version']}")\n        print(f"  Total files: {info['total_files']:,}")\n        print(f"  Size: {info['total_size_bytes'] / (1024*1024):.1f}MB")\n        print(f"  Average quality: {info['avg_quality_score']:.3f}")\n        print()\n\n    # Show language statistics\n    get_language_stats()\n\n    print("\\nTo load the complete dataset:")\n    print("  dataset = load_complete_dataset()")\n    print("\\nTo load Python only:")\n    print("  dataset = load_complete_dataset(language_filter=['Python'])")\n    print("\\nFor a quick sample:")\n    print("  sample = load_sample(n_samples=100)")\n'''\n\n    script_path = os.path.join(output_folder, "load_dataset.py")\n    with open(script_path, 'w', encoding='utf-8') as f:\n        f.write(script_content)\n\n# MAIN EXECUTION\nif __name__ == "__main__":\n    # CONFIGURATION\n    input_folder = r"C:\Users\vince\Desktop\HuggingFace_Sample"\n    output_folder = r"C:\Users\vince\Desktop\The_Stack_v2_Safe"\n\n    print("NATIVE -> SAFE HUGGINGFACE CONVERSION")\n    print("=" * 60)\n\n    # Check the input folder\n    if not os.path.exists(input_folder):\n        print(f"ERROR: input folder not found: {input_folder}")\n        exit(1)\n\n    # Count the files in the folder\n    file_count = sum(1 for root, dirs, files in os.walk(input_folder) for file in files)\n    print(f"Input folder: {file_count:,} files found")\n\n    # Start the conversion\n    dataset_info = process_native_folder(input_folder, output_folder, files_per_chunk=40000)\n\n    print(f"\nALL DONE!")\n    print(f"Safe dataset ready for upload: {output_folder}")\n    print(f"No problematic .arrow files - only safe Parquet!") | scripts\conversione_parquet.py | conversione_parquet.py | Python | 15,778 | 0.95 | 0.147465 | 0.114286 | python-kit | 997 | 2024-04-04T10:52:20.693256 | BSD-3-Clause | false | 991081a1580cedfdf6bef68094bf0336 |
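The quality heuristic in the conversion script above is a handful of additive signals capped at 1.0. Restated compactly, with the same thresholds as the source:

```python
import re

def quality_score(content: str) -> float:
    """Additive quality heuristic from the conversion script:
    base 0.5, plus bonuses for sane length, comments, recognizable
    code structure, and a low special-character ratio; capped at 1.0."""
    score = 0.5
    if 100 <= len(content) <= 50_000:
        score += 0.2   # reasonable length
    if "//" in content or "#" in content or "/*" in content:
        score += 0.1   # has comments
    if any(k in content for k in ("function", "def ", "class ", "import", "require")):
        score += 0.15  # recognizable code structure
    special = len(re.findall(r"[^\w\s]", content)) / max(len(content), 1)
    if special < 0.3:
        score += 0.05  # not symbol soup
    return min(1.0, score)
```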
"""\nSimplified conversion without heavy external dependencies\nUSES ONLY THE PYTHON STANDARD LIBRARY\n"""\n\nimport os\nimport json\nimport hashlib\nimport csv\nfrom pathlib import Path\nfrom datetime import datetime\nimport random\nimport re\nfrom collections import defaultdict\n\n# Extension -> language mapping\nLANGUAGE_MAPPING = {\n    '.py': 'Python',\n    '.js': 'JavaScript',\n    '.php': 'PHP',\n    '.rb': 'Ruby',\n    '.swift': 'Swift',\n    '.cpp': 'C++',\n    '.h': 'C',\n    '.sh': 'Shell',\n    '.yml': 'YAML',\n    '.yaml': 'YAML',\n    '.md': 'Markdown',\n    '.json': 'JSON',\n    '.html': 'HTML',\n    '.xml': 'XML',\n    '.java': 'Java',\n    '.c': 'C'\n}\n\ndef get_file_language(file_path):\n    """Determine the language from the file extension"""\n    ext = Path(file_path).suffix.lower()\n    return LANGUAGE_MAPPING.get(ext, 'Other')\n\ndef calculate_quality_score(content):\n    """Compute a quality score (simplified)"""\n    try:\n        score = 0.5\n\n        # Appropriate length\n        if 100 <= len(content) <= 50000:\n            score += 0.2\n\n        # Comments present\n        if '//' in content or '#' in content or '/*' in content:\n            score += 0.1\n\n        # Code structure\n        if any(keyword in content for keyword in ['function', 'def ', 'class ', 'import', 'require']):\n            score += 0.15\n\n        return min(1.0, score)\n    except:\n        return 0.3\n\ndef calculate_complexity(content):\n    """Compute complexity (simplified)"""\n    try:\n        control_structures = len(re.findall(r'\b(if|for|while|switch|try|catch)\b', content))\n        functions = len(re.findall(r'\b(def|function|class)\b', content))\n        lines = content.count('\n')\n\n        if lines == 0:\n            return 0.1\n\n        complexity = (control_structures + functions) / lines\n        return min(1.0, complexity)\n    except:\n        return 0.1\n\ndef calculate_doc_ratio(content):\n    """Compute the documentation ratio"""\n    try:\n        lines = content.split('\n')\n        comment_lines = sum(1 for line in lines if line.strip().startswith(('#', '//', '/*', '*')))\n        total_lines = len([line for line in lines if line.strip()])\n        return comment_lines / max(total_lines, 1)\n    except:\n        return 0.0\n\ndef generate_metadata():\n    """Generate random metadata"""\n    repo_names = ['awesome-app', 'react-lib', 'vue-tools', 'node-utils', 'python-kit']\n    licenses = ['MIT', 'Apache-2.0', 'GPL-3.0', 'BSD-3-Clause']\n\n    # Random date within the last 2 years\n    import time\n    now = time.time()\n    two_years_ago = now - (2 * 365 * 24 * 60 * 60)\n    random_time = random.uniform(two_years_ago, now)\n    date_str = datetime.fromtimestamp(random_time).isoformat()\n\n    return {\n        'repository': random.choice(repo_names),\n        'stars': random.randint(0, 1000),\n        'created_date': date_str,\n        'license': random.choice(licenses)\n    }\n\ndef process_files_to_csv(input_folder, output_folder, max_files_per_csv=50000):\n    """\n    Process files and save them in CSV format (safer than Parquet)\n    """\n\n    print("SIMPLIFIED CONVERSION - STANDARD LIBRARY ONLY")\n    print("=" * 55)\n    print(f"Input: {input_folder}")\n    print(f"Output: {output_folder}")\n\n    # Create the output folder\n    os.makedirs(output_folder, exist_ok=True)\n\n    # Scan files\n    print("\nScanning files...")\n    all_files = []\n    language_stats = defaultdict(int)\n\n    for root, dirs, files in os.walk(input_folder):\n        for file in files:\n            file_path = os.path.join(root, file)\n            rel_path = os.path.relpath(file_path, input_folder)\n            language = get_file_language(file_path)\n            language_stats[language] += 1\n\n            all_files.append({\n                'path': file_path,\n                'rel_path': rel_path,\n                'language': language\n            })\n\n    print(f"OK: found {len(all_files)} files")\n    print("\nLanguages:")\n    for lang, count in sorted(language_stats.items(), key=lambda x: x[1], reverse=True)[:10]:\n        print(f"  {lang}: {count:,}")\n\n    # Compute how many CSV files are needed\n    num_csvs = (len(all_files) + max_files_per_csv - 1) // max_files_per_csv\n    print(f"\nCreating {num_csvs} CSV files")\n\n    total_processed = 0\n\n    for csv_idx in range(num_csvs):\n        start_idx = csv_idx * max_files_per_csv\n        end_idx = min((csv_idx + 1) * max_files_per_csv, len(all_files))\n        chunk_files = all_files[start_idx:end_idx]\n\n        print(f"\nCSV {csv_idx + 1}/{num_csvs} ({len(chunk_files)} files)")\n\n        # CSV file name\n        csv_filename = f"dataset_{csv_idx+1:04d}_of_{num_csvs:04d}.csv"\n        csv_path = os.path.join(output_folder, csv_filename)\n\n        # Create the CSV\n        with open(csv_path, 'w', newline='', encoding='utf-8') as csvfile:\n            fieldnames = [\n                'content', 'path', 'filename', 'language', 'size_bytes',\n                'quality_score', 'complexity', 'documentation_ratio',\n                'repository', 'stars', 'created_date', 'license',\n                'is_test', 'file_hash'\n            ]\n\n            writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n            writer.writeheader()\n\n            processed_in_csv = 0\n\n            for i, file_info in enumerate(chunk_files):\n                if i % 1000 == 0:\n                    print(f"  {i}/{len(chunk_files)} files...")\n\n                try:\n                    # Read the file\n                    with open(file_info['path'], 'r', encoding='utf-8', errors='ignore') as f:\n                        content = f.read()\n\n                    # Skip empty or oversized files\n                    if not content.strip() or len(content) > 100000:\n                        continue\n\n                    # File stats\n                    file_stats = os.stat(file_info['path'])\n                    metadata = generate_metadata()\n\n                    # CSV record\n                    record = {\n                        'content': content.replace('\n', '\\n').replace('\r', '\\r'),  # Escape newlines\n                        'path': file_info['rel_path'],\n                        'filename': os.path.basename(file_info['path']),\n                        'language': file_info['language'],\n                        'size_bytes': file_stats.st_size,\n                        'quality_score': calculate_quality_score(content),\n                        'complexity': calculate_complexity(content),\n                        'documentation_ratio': calculate_doc_ratio(content),\n                        'repository': metadata['repository'],\n                        'stars': metadata['stars'],\n                        'created_date': metadata['created_date'],\n                        'license': metadata['license'],\n                        'is_test': 'test' in file_info['rel_path'].lower(),\n                        'file_hash': hashlib.md5(content.encode()).hexdigest()\n                    }\n\n                    writer.writerow(record)\n                    processed_in_csv += 1\n                    total_processed += 1\n\n                except Exception as e:\n                    print(f"  WARNING: error: {file_info['rel_path']}")\n                    continue\n\n        # CSV file size\n        csv_size_mb = os.path.getsize(csv_path) / (1024*1024)\n        print(f"
{csv_filename}: {csv_size_mb:.1f}MB, {processed_in_csv} record")\n \n # Statistiche finali\n print(f"\n๐ COMPLETATO!")\n print(f" File processati: {total_processed:,}")\n print(f" CSV creati: {num_csvs}")\n \n # Metadati dataset\n dataset_info = {\n "name": "The_Stack_Processed-v2",\n "version": "2.0.0",\n "total_files": total_processed,\n "num_csvs": num_csvs,\n "languages": dict(language_stats),\n "created_date": datetime.now().isoformat(),\n "format": "csv"\n }\n \n with open(os.path.join(output_folder, "dataset_info.json"), 'w') as f:\n json.dump(dataset_info, f, indent=2)\n \n # Script di caricamento CSV\n create_csv_loader(output_folder)\n \n print(f"โ
Dataset CSV pronto in: {output_folder}")\n return dataset_info\n\ndef create_csv_loader(output_folder):\n """Crea script per caricare CSV"""\n \n script_content = '''"""\nCaricamento dataset CSV - The_Stack_Processed-v2\n"""\n\nimport csv\nimport os\nimport json\n\ndef load_csv_dataset(data_path=".", language_filter=None, max_files=None):\n """\n Carica dataset da file CSV\n \n Args:\n data_path: Path ai file CSV\n language_filter: Lista linguaggi da includere\n max_files: Numero massimo di file da caricare\n \n Returns:\n Lista di record\n """\n \n print("๐ Caricamento dataset CSV...")\n \n # Trova file CSV\n csv_files = sorted([f for f in os.listdir(data_path) if f.endswith('.csv')])\n print(f"๐ Trovati {len(csv_files)} file CSV")\n \n all_records = []\n loaded_files = 0\n \n for csv_file in csv_files:\n print(f"๐ Caricando {csv_file}")\n \n with open(os.path.join(data_path, csv_file), 'r', encoding='utf-8') as f:\n reader = csv.DictReader(f)\n \n for record in reader:\n # Filtro per linguaggio\n if language_filter and record['language'] not in language_filter:\n continue\n \n # Ripristina newlines\n record['content'] = record['content'].replace('\\\\n', '\\n').replace('\\\\r', '\\r')\n \n # Converti tipi\n record['size_bytes'] = int(record['size_bytes'])\n record['quality_score'] = float(record['quality_score'])\n record['complexity'] = float(record['complexity'])\n record['documentation_ratio'] = float(record['documentation_ratio'])\n record['stars'] = int(record['stars'])\n record['is_test'] = record['is_test'].lower() == 'true'\n \n all_records.append(record)\n loaded_files += 1\n \n # Limite file se specificato\n if max_files and loaded_files >= max_files:\n break\n \n if max_files and loaded_files >= max_files:\n break\n \n print(f"โ
Caricati {len(all_records)} record")\n return all_records\n\ndef get_language_stats(data_path="."):\n """Statistiche linguaggi"""\n info_file = os.path.join(data_path, "dataset_info.json")\n if os.path.exists(info_file):\n with open(info_file, 'r') as f:\n info = json.load(f)\n return info.get('languages', {})\n return {}\n\ndef load_sample(data_path=".", n_samples=10, language=None):\n """Carica campione veloce"""\n return load_csv_dataset(data_path, \n language_filter=[language] if language else None, \n max_files=n_samples)\n\n# Esempio\nif __name__ == "__main__":\n print("๐ Statistiche linguaggi:")\n stats = get_language_stats()\n for lang, count in sorted(stats.items(), key=lambda x: x[1], reverse=True)[:10]:\n print(f" {lang}: {count:,}")\n \n print("\\n๐ Per caricare dataset:")\n print(" records = load_csv_dataset()")\n print("\\n๐ฏ Solo Python:")\n print(" records = load_csv_dataset(language_filter=['Python'])")\n'''\n \n script_path = os.path.join(output_folder, "load_csv_dataset.py")\n with open(script_path, 'w', encoding='utf-8') as f:\n f.write(script_content)\n\n# ESECUZIONE\nif __name__ == "__main__":\n # CONFIGURAZIONE\n input_folder = r"C:\Users\vince\Desktop\HuggingFace_Sample"\n output_folder = r"C:\Users\vince\Desktop\The_Stack_v2_CSV"\n \n print("๐ฏ CONVERSIONE CON SOLO LIBRERIE STANDARD")\n print("=" * 50)\n \n if not os.path.exists(input_folder):\n print(f"โ Cartella non trovata: {input_folder}")\n exit(1)\n \n # Conta file\n file_count = sum(1 for root, dirs, files in os.walk(input_folder) for file in files)\n print(f"๐ File da processare: {file_count:,}")\n \n # Processa\n dataset_info = process_files_to_csv(input_folder, output_folder)\n \n print(f"\n๐ COMPLETATO!")\n print(f"๐ File CSV sicuri in: {output_folder}")\n print(f"๐ Caricamento: load_csv_dataset.py")\n print(f"๐ฏ I file CSV sono SICURI AL 100% per HuggingFace!") | scripts\conversione_semplice.py | conversione_semplice.py | Python | 13,033 | 0.95 | 0.155313 | 0.094595 | python-kit 
| 83 | 2024-08-20T05:19:56.114429 | Apache-2.0 | false | 149a50aaaf4cd0280441c1012a4e38e8 |
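The conversion script stores each file's source in a single CSV cell by escaping real newlines as literal `\n`/`\r`, and the generated loader reverses that escape on read. A minimal, self-contained sketch of that round trip (the helper names and sample content are illustrative, not from the dataset):

```python
import csv
import io

def escape_content(text):
    # Writer convention: encode real newlines as the two-character sequences \n / \r
    return text.replace('\n', '\\n').replace('\r', '\\r')

def unescape_content(text):
    # Loader convention: restore the real newlines
    return text.replace('\\n', '\n').replace('\\r', '\r')

# Write one record to an in-memory CSV
sample = "def hello():\n    return 'world'\n"
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['content', 'language'])
writer.writeheader()
writer.writerow({'content': escape_content(sample), 'language': 'Python'})

# Read it back and undo the escaping
buf.seek(0)
row = next(csv.DictReader(buf))
restored = unescape_content(row['content'])

assert restored == sample
assert row['language'] == 'Python'
```

Note that Python's `csv` module already handles embedded newlines via quoting when the file is opened with `newline=''`, so the manual escape is belt-and-braces; it also makes the round trip lossy for source files that themselves contain a literal backslash-n sequence, which is worth keeping in mind when consuming the dataset.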