Task 38578066

Name wu_356cfd54-GIANNI_GPROTO7-0-1-RND8485_0
Workunit 31543546
Created 25 Sep 2025, 12:56:14 UTC
Sent 25 Sep 2025, 12:56:28 UTC
Report deadline 30 Sep 2025, 12:56:28 UTC
Received 25 Sep 2025, 13:00:56 UTC
Server state Over
Outcome Computation error
Client state Compute error
Exit status 195 (0x000000C3) EXIT_CHILD_FAILED
Computer ID 611060
Run time 2 min 26 sec
CPU time 26 sec
Validate state Invalid
Credit 0.00
Device peak FLOPS 82,583.17 GFLOPS
Application version LLM: LLMs for chemistry v1.01 (cuda124L)
windows_x86_64
Peak working set size 7.93 GB
Peak swap size 28.55 GB
Peak disk usage 6.35 GB

Stderr output

<core_client_version>8.2.4</core_client_version>
<![CDATA[
<message>
The operating system cannot run (null).
 (0xc3) - exit code 195 (0xc3)</message>
<stderr_txt>
05:58:02 (13916): wrapper (7.9.26016): starting
05:58:02 (13916): wrapper: running Library/usr/bin/tar.exe (xjvf input.tar.bz2)
tasks.json
run.bat
conf.yaml
main_generation-0.1.0-py3-none-any.whl
run.sh
05:58:03 (13916): Library/usr/bin/tar.exe exited; CPU time 0.000000
05:58:03 (13916): wrapper: running C:/Windows/system32/cmd.exe (/c call Scripts\activate.bat && Scripts\conda-unpack.exe && run.bat)

Generating train split: 0 examples [00:00, ? examples/s]
Generating train split: 2500 examples [00:00, 356755.58 examples/s]
[W925 05:58:42.000000000 socket.cpp:759] [c10d] The client socket has failed to connect to [ct-office]:61531 (system error: 10049 - The requested address is not valid in its context.).

Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]

Loading safetensors checkpoint shards:  50% Completed | 1/2 [00:01<00:01,  1.70s/it]

Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00,  1.69s/it]

Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]

Loading safetensors checkpoint shards:  50% Completed | 1/2 [00:01<00:01,  1.73s/it]

Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00,  1.79s/it]

Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:03<00:00,  1.78s/it]


Capturing CUDA graph shapes:   0%|          | 0/35 [00:00<?, ?it/s]
Capturing CUDA graph shapes:   3%|2         | 1/35 [00:00<00:24,  1.39it/s]
Capturing CUDA graph shapes:   3%|2         | 1/35 [00:02<01:23,  2.47s/it]
[rank0]: Traceback (most recent call last):
[rank0]:   File "wheel_contents/aiengine/main_generation.py", line 87, in <module>
[rank0]:   File "wheel_contents/aiengine/model.py", line 36, in __init__
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\utils.py", line 1096, in inner
[rank0]:     return fn(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\entrypoints\llm.py", line 243, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 521, in from_engine_args
[rank0]:     return engine_cls.from_vllm_config(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 497, in from_vllm_config
[rank0]:     return cls(
[rank0]:            ^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 284, in __init__
[rank0]:     self._initialize_kv_caches()
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\engine\llm_engine.py", line 446, in _initialize_kv_caches
[rank0]:     self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\executor\executor_base.py", line 123, in initialize_cache
[rank0]:     self.collective_rpc("initialize_cache",
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\executor\uniproc_executor.py", line 56, in collective_rpc
[rank0]:     answer = run_method(self.driver_worker, method, args, kwargs)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\utils.py", line 2359, in run_method
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\worker.py", line 309, in initialize_cache
[rank0]:     self._warm_up_model()
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\worker.py", line 339, in _warm_up_model
[rank0]:     self.model_runner.capture_model(self.gpu_cache)
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\model_runner.py", line 1585, in capture_model
[rank0]:     graph_runner.capture(**capture_inputs)
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\worker\model_runner.py", line 1954, in capture
[rank0]:     self.model(
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 462, in forward
[rank0]:     hidden_states = self.model(input_ids, positions, intermediate_tensors,
[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\compilation\decorators.py", line 172, in __call__
[rank0]:     return self.forward(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 338, in forward
[rank0]:     hidden_states, residual = layer(
[rank0]:                               ^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 251, in forward
[rank0]:     hidden_states = self.mlp(hidden_states)
[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\model_executor\models\qwen2.py", line 95, in forward
[rank0]:     gate_up, _ = self.gate_up_proj(x)
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\layers.py", line 570, in forward
[rank0]:     output_parallel = self.apply(input_, bias)
[rank0]:                       ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\layers.py", line 413, in apply
[rank0]:     self.punica_wrapper.add_lora_linear(output, x, self.lora_a_stacked,
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\punica_wrapper\punica_gpu.py", line 238, in add_lora_linear
[rank0]:     self.add_expand(
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\punica_wrapper\punica_gpu.py", line 142, in add_expand
[rank0]:     lora_expand(
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\_ops.py", line 1123, in __call__
[rank0]:     return self._op(*args, **(kwargs or {}))
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\vllm\lora\ops\triton_ops\lora_expand.py", line 229, in _lora_expand
[rank0]:     _lora_expand_kernel[grid](
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\runtime\jit.py", line 330, in <lambda>
[rank0]:     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
[rank0]:                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\runtime\jit.py", line 623, in run
[rank0]:     kernel = self.compile(
[rank0]:              ^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\compiler\compiler.py", line 283, in compile
[rank0]:     next_module = compile_ir(module, metadata)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\backends\nvidia\compiler.py", line 403, in <lambda>
[rank0]:     stages["llir"] = lambda src, metadata: self.make_llir(src, metadata, options, self.capability)
[rank0]:                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\backends\nvidia\compiler.py", line 325, in make_llir
[rank0]:     llvm.link_extern_libs(llvm_mod, paths)
[rank0]: ValueError: Failed to parse library at C:\ProgramData\BOINC\slots\0\Lib\site-packages\triton\backends\nvidia\lib\libdevice.10.bc
05:59:00 (13916): C:/Windows/system32/cmd.exe exited; CPU time 26.578125
05:59:00 (13916): app exit status: 0x16
05:59:00 (13916): called boinc_finish(195)
0 bytes in 0 Free Blocks.
460 bytes in 8 Normal Blocks.
1144 bytes in 1 CRT Blocks.
0 bytes in 0 Ignore Blocks.
0 bytes in 0 Client Blocks.
Largest number used: 0 bytes.
Total allocations: 1068593 bytes.
Dumping objects ->
{1601455} normal block at 0x000001EF621E4C20, 48 bytes long.
 Data: <PATH=C:\ProgramD> 50 41 54 48 3D 43 3A 5C 50 72 6F 67 72 61 6D 44 
{1601444} normal block at 0x000001EF621E4BB0, 48 bytes long.
 Data: <HOME=C:\ProgramD> 48 4F 4D 45 3D 43 3A 5C 50 72 6F 67 72 61 6D 44 
{1601433} normal block at 0x000001EF621E50F0, 48 bytes long.
 Data: <TMP=C:\ProgramDa> 54 4D 50 3D 43 3A 5C 50 72 6F 67 72 61 6D 44 61 
{1601422} normal block at 0x000001EF621E5080, 48 bytes long.
 Data: <TEMP=C:\ProgramD> 54 45 4D 50 3D 43 3A 5C 50 72 6F 67 72 61 6D 44 
{1601411} normal block at 0x000001EF621E5010, 48 bytes long.
 Data: <TMPDIR=C:\Progra> 54 4D 50 44 49 52 3D 43 3A 5C 50 72 6F 67 72 61 
{1601380} normal block at 0x000001EF63F354C0, 64 bytes long.
 Data: <PATH=C:\ProgramD> 50 41 54 48 3D 43 3A 5C 50 72 6F 67 72 61 6D 44 
{1601369} normal block at 0x000001EF63F15280, 102 bytes long.
 Data: <<project_prefere> 3C 70 72 6F 6A 65 63 74 5F 70 72 65 66 65 72 65 
..\api\boinc_api.cpp(309) : {1601366} normal block at 0x000001EF621E0390, 8 bytes long.
 Data: <   bï   > 00 00 19 62 EF 01 00 00 
{1600628} normal block at 0x000001EF63F14F10, 102 bytes long.
 Data: <<project_prefere> 3C 70 72 6F 6A 65 63 74 5F 70 72 65 66 65 72 65 
{1599912} normal block at 0x000001EF621DFEE0, 8 bytes long.
 Data: <  ûcï   > 80 81 FB 63 EF 01 00 00 
..\zip\boinc_zip.cpp(122) : {303} normal block at 0x000001EF621D1F30, 260 bytes long.
 Data: <                > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
{288} normal block at 0x000001EF621D1090, 80 bytes long.
 Data: </c call Scripts\> 2F 63 20 63 61 6C 6C 20 53 63 72 69 70 74 73 5C 
{287} normal block at 0x000001EF621E5A70, 16 bytes long.
 Data: <8j bï           > 38 6A 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{286} normal block at 0x000001EF621E6290, 16 bytes long.
 Data: < j bï           > 10 6A 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{285} normal block at 0x000001EF621E6150, 16 bytes long.
 Data: <èi bï           > E8 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{284} normal block at 0x000001EF621E58E0, 16 bytes long.
 Data: <Ài bï           > C0 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{283} normal block at 0x000001EF621E61A0, 16 bytes long.
 Data: < i bï           > 98 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{282} normal block at 0x000001EF621E6240, 16 bytes long.
 Data: <pi bï           > 70 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{281} normal block at 0x000001EF621E4E50, 48 bytes long.
 Data: <ComSpec=C:\Windo> 43 6F 6D 53 70 65 63 3D 43 3A 5C 57 69 6E 64 6F 
{280} normal block at 0x000001EF621E5890, 16 bytes long.
 Data: <ø  bï           > F8 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{279} normal block at 0x000001EF621E16A0, 32 bytes long.
 Data: <SystemRoot=C:\Wi> 53 79 73 74 65 6D 52 6F 6F 74 3D 43 3A 5C 57 69 
{278} normal block at 0x000001EF621E5ED0, 16 bytes long.
 Data: <Ð  bï           > D0 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{276} normal block at 0x000001EF621E6330, 16 bytes long.
 Data: <¨  bï           > A8 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{275} normal block at 0x000001EF621E6380, 16 bytes long.
 Data: <   bï           > 80 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{274} normal block at 0x000001EF621E6060, 16 bytes long.
 Data: <X  bï           > 58 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{273} normal block at 0x000001EF621E5840, 16 bytes long.
 Data: <0  bï           > 30 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{272} normal block at 0x000001EF621E5B60, 16 bytes long.
 Data: <   bï           > 08 18 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{271} normal block at 0x000001EF621E17C0, 32 bytes long.
 Data: <CUDA_DEVICE=0 PU> 43 55 44 41 5F 44 45 56 49 43 45 3D 30 00 50 55 
{270} normal block at 0x000001EF621E66A0, 16 bytes long.
 Data: <à  bï           > E0 17 1D 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{269} normal block at 0x000001EF621D17E0, 320 bytes long.
 Data: < f bï   À  bï   > A0 66 1E 62 EF 01 00 00 C0 17 1E 62 EF 01 00 00 
{268} normal block at 0x000001EF621E5CA0, 16 bytes long.
 Data: <Pi bï           > 50 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{267} normal block at 0x000001EF621E61F0, 16 bytes long.
 Data: <(i bï           > 28 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{266} normal block at 0x000001EF621E0E60, 32 bytes long.
 Data: <C:/Windows/syste> 43 3A 2F 57 69 6E 64 6F 77 73 2F 73 79 73 74 65 
{265} normal block at 0x000001EF621E6510, 16 bytes long.
 Data: < i bï           > 00 69 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{264} normal block at 0x000001EF621E0DA0, 32 bytes long.
 Data: <xjvf input.tar.b> 78 6A 76 66 20 69 6E 70 75 74 2E 74 61 72 2E 62 
{263} normal block at 0x000001EF621E5E80, 16 bytes long.
 Data: <Hh bï           > 48 68 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{262} normal block at 0x000001EF621E59D0, 16 bytes long.
 Data: < h bï           > 20 68 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{261} normal block at 0x000001EF621E57F0, 16 bytes long.
 Data: <øg bï           > F8 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{260} normal block at 0x000001EF621E62E0, 16 bytes long.
 Data: <Ðg bï           > D0 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{259} normal block at 0x000001EF621E64C0, 16 bytes long.
 Data: <¨g bï           > A8 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{258} normal block at 0x000001EF621E5C00, 16 bytes long.
 Data: < g bï           > 80 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{256} normal block at 0x000001EF621E5AC0, 16 bytes long.
 Data: < L bï           > 90 4C 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{255} normal block at 0x000001EF621E4C90, 40 bytes long.
 Data: <ÀZ bï   ÀTócï   > C0 5A 1E 62 EF 01 00 00 C0 54 F3 63 EF 01 00 00 
{254} normal block at 0x000001EF621E5F70, 16 bytes long.
 Data: <`g bï           > 60 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{253} normal block at 0x000001EF621E57A0, 16 bytes long.
 Data: <8g bï           > 38 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{252} normal block at 0x000001EF621E18E0, 32 bytes long.
 Data: <Library/usr/bin/> 4C 69 62 72 61 72 79 2F 75 73 72 2F 62 69 6E 2F 
{251} normal block at 0x000001EF621E5DE0, 16 bytes long.
 Data: < g bï           > 10 67 1E 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{250} normal block at 0x000001EF621E6710, 992 bytes long.
 Data: <à] bï   à  bï   > E0 5D 1E 62 EF 01 00 00 E0 18 1E 62 EF 01 00 00 
{94} normal block at 0x000001EF621E1820, 32 bytes long.
 Data: <windows_x86_64__> 77 69 6E 64 6F 77 73 5F 78 38 36 5F 36 34 5F 5F 
{93} normal block at 0x000001EF621E0160, 16 bytes long.
 Data: <`å bï           > 60 E5 1C 62 EF 01 00 00 00 00 00 00 00 00 00 00 
{92} normal block at 0x000001EF621CE560, 40 bytes long.
 Data: <`  bï      bï   > 60 01 1E 62 EF 01 00 00 20 18 1E 62 EF 01 00 00 
{71} normal block at 0x000001EF621E03E0, 16 bytes long.
 Data: < ê}ªö           > 80 EA 7D AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{70} normal block at 0x000001EF621DFCB0, 16 bytes long.
 Data: <@é}ªö           > 40 E9 7D AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{69} normal block at 0x000001EF621E0930, 16 bytes long.
 Data: <øWzªö           > F8 57 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{68} normal block at 0x000001EF621E0A70, 16 bytes long.
 Data: <ØWzªö           > D8 57 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{67} normal block at 0x000001EF621E0070, 16 bytes long.
 Data: <P zªö           > 50 04 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{66} normal block at 0x000001EF621DFD00, 16 bytes long.
 Data: <0 zªö           > 30 04 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{65} normal block at 0x000001EF621DFF80, 16 bytes long.
 Data: <à zªö           > E0 02 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{64} normal block at 0x000001EF621E0110, 16 bytes long.
 Data: <  zªö           > 10 04 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{63} normal block at 0x000001EF621E08E0, 16 bytes long.
 Data: <p zªö           > 70 04 7A AA F6 7F 00 00 00 00 00 00 00 00 00 00 
{62} normal block at 0x000001EF621E01B0, 16 bytes long.
 Data: < Àxªö           > 18 C0 78 AA F6 7F 00 00 00 00 00 00 00 00 00 00 
Object dump complete.

</stderr_txt>
]]>


©2025 Universitat Pompeu Fabra