Task 38578402

Name wu_68a804a5-GIANNI_GPROTO7-0-1-RND8465_0
Workunit 31543859
Created 26 Sep 2025, 17:57:39 UTC
Sent 26 Sep 2025, 18:00:21 UTC
Report deadline 1 Oct 2025, 18:00:21 UTC
Received 26 Sep 2025, 18:04:39 UTC
Server state Over
Outcome Computation error
Client state Compute error
Exit status 195 (0x000000C3) EXIT_CHILD_FAILED
Computer ID 611060
Run time 2 min 36 sec
CPU time 37 sec
Validate state Invalid
Credit 0.00
Device peak FLOPS 82,583.17 GFLOPS
Application version LLM: LLMs for chemistry v1.01 (cuda124L)
windows_x86_64
Peak working set size 7.97 GB
Peak swap size 28.85 GB
Peak disk usage 6.35 GB

Stderr output

<core_client_version>8.2.4</core_client_version>
<![CDATA[
<message>
The operating system cannot run (null).
 (0xc3) - exit code 195 (0xc3)</message>
<stderr_txt>
11:01:55 (16080): wrapper (7.9.26016): starting
11:01:55 (16080): wrapper: running Library/usr/bin/tar.exe (xjvf input.tar.bz2)
tasks.json
run.bat
conf.yaml
main_generation-0.1.0-py3-none-any.whl
run.sh
11:01:56 (16080): Library/usr/bin/tar.exe exited; CPU time 0.000000
11:01:56 (16080): wrapper: running C:/Windows/system32/cmd.exe (/c call Scripts\activate.bat && Scripts\conda-unpack.exe && run.bat)

Generating train split: 0 examples [00:00, ? examples/s]
Generating train split: 2500 examples [00:00, 356864.85 examples/s]
[W926 11:02:34.000000000 socket.cpp:759] [c10d] The client socket has failed to connect to [ct-office]:51670 (system error: 10049 - The requested address is not valid in its context.).

Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00,  1.77s/it]

Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00,  1.83s/it]

Capturing CUDA graph shapes:   0%|          | 0/35 [00:00<?, ?it/s]
Capturing CUDA graph shapes:  49%|####8     | 17/35 [00:12<00:13,  1.33it/s]
[rank0]: Traceback (most recent call last):
[rank0]:   File "wheel_contents/aiengine/main_generation.py", line 87, in <module>
[rank0]:   File "wheel_contents/aiengine/model.py", line 36, in __init__
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\utils.py", line 1096, in inner
[rank0]:     return fn(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\entrypoints\llm.py", line 243, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 521, in from_engine_args
[rank0]:     return engine_cls.from_vllm_config(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 497, in from_vllm_config
[rank0]:     return cls(
[rank0]:            ^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 284, in __init__
[rank0]:     self._initialize_kv_caches()
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 446, in _initialize_kv_caches
[rank0]:     self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\executor\executor_base.py", line 123, in initialize_cache
[rank0]:     self.collective_rpc("initialize_cache",
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\executor\uniproc_executor.py", line 56, in collective_rpc
[rank0]:     answer = run_method(self.driver_worker, method, args, kwargs)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\utils.py", line 2359, in run_method
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\worker.py", line 309, in initialize_cache
[rank0]:     self._warm_up_model()
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\worker.py", line 339, in _warm_up_model
[rank0]:     self.model_runner.capture_model(self.gpu_cache)
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\model_runner.py", line 1585, in capture_model
[rank0]:     graph_runner.capture(**capture_inputs)
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\model_runner.py", line 1954, in capture
[rank0]:     self.model(
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 462, in forward
[rank0]:     hidden_states = self.model(input_ids, positions, intermediate_tensors,
[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\compilation\decorators.py", line 172, in __call__
[rank0]:     return self.forward(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 338, in forward
[rank0]:     hidden_states, residual = layer(
[rank0]:                               ^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 243, in forward
[rank0]:     hidden_states = self.self_attn(
[rank0]:                     ^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 178, in forward
[rank0]:     output, _ = self.o_proj(attn_output)
[rank0]:                 ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\layers.py", line 921, in forward
[rank0]:     output_parallel = self.apply(input_parallel)
[rank0]:                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\layers.py", line 413, in apply
[rank0]:     self.punica_wrapper.add_lora_linear(output, x, self.lora_a_stacked,
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\punica_wrapper\punica_gpu.py", line 232, in add_lora_linear
[rank0]:     self.add_shrink(
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\punica_wrapper\punica_gpu.py", line 94, in add_shrink
[rank0]:     lora_shrink(
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\_ops.py", line 1123, in __call__
[rank0]:     return self._op(*args, **(kwargs or {}))
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\ops\triton_ops\lora_shrink.py", line 187, in _lora_shrink
[rank0]:     _lora_shrink_kernel[grid](
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\runtime\jit.py", line 330, in <lambda>
[rank0]:     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
[rank0]:                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\runtime\jit.py", line 623, in run
[rank0]:     kernel = self.compile(
[rank0]:              ^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\compiler\compiler.py", line 283, in compile
[rank0]:     next_module = compile_ir(module, metadata)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\backends\nvidia\compiler.py", line 403, in <lambda>
[rank0]:     stages["llir"] = lambda src, metadata: self.make_llir(src, metadata, options, self.capability)
[rank0]:                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\backends\nvidia\compiler.py", line 325, in make_llir
[rank0]:     llvm.link_extern_libs(llvm_mod, paths)
[rank0]: ValueError: Failed to parse library at C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\backends\nvidia\lib\libdevice.10.bc
11:03:03 (16080): C:/Windows/system32/cmd.exe exited; CPU time 37.734375
11:03:03 (16080): app exit status: 0x16
11:03:03 (16080): called boinc_finish(195)
0 bytes in 0 Free Blocks.
460 bytes in 8 Normal Blocks.
1144 bytes in 1 CRT Blocks.
0 bytes in 0 Ignore Blocks.
0 bytes in 0 Client Blocks.
Largest number used: 0 bytes.
Total allocations: 1285037 bytes.
Dumping objects ->
Object dump complete.

</stderr_txt>
]]>
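The fatal error in the trace above is Triton's NVIDIA backend raising `ValueError: Failed to parse library` on `libdevice.10.bc`, which then propagates up through vLLM's CUDA-graph capture and ends the task with exit status 195. That file is an LLVM bitcode library, and a parse failure at this point often indicates a truncated or corrupted copy in the unpacked slot directory (for example, from an interrupted download or antivirus quarantine) rather than a GPU problem. As a minimal sanity-check sketch (the path is taken from the traceback; adjust it to your own slot, and note this only verifies the file header, not full bitcode validity):

```python
# Quick sanity check for an LLVM bitcode file such as libdevice.10.bc.
# Raw bitcode begins with the magic bytes b"BC\xc0\xde"; bitcode may also
# be wrapped in a header whose first word is 0x0B17C0DE (little-endian).
# A zero-length or truncated file will fail both checks.
import struct

def looks_like_llvm_bitcode(path):
    """Return True if the file starts with a known LLVM bitcode magic."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head == b"BC\xc0\xde":                   # raw bitcode
        return True
    if head == struct.pack("<I", 0x0B17C0DE):   # bitcode wrapper header
        return True
    return False

# Example (path from the traceback above; adjust the slot number as needed):
# looks_like_llvm_bitcode(r"C:\ProgramData\BOINC\slots\0\Lib\site-packages"
#                         r"\triton\backends\nvidia\lib\libdevice.10.bc")
```

If the check fails, resetting the project (so BOINC re-downloads and re-unpacks the application files) is a reasonable first step before suspecting the hardware.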


©2025 Universitat Pompeu Fabra