In hdfs/fs.py we cache fs instances by (IP, port, user). The purpose here is twofold: save time by reusing an existing connection and avoid situations where two different Python objects map to the same Java instance (this can lead to crashes if the currently used instance is closed unexpectedly somewhere else). What the cache does not do is guarantee that different cache entries map to different name nodes, since a given name node can have multiple IPs (e.g., a NIC and the loopback interface).
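The caching scheme can be sketched as follows. This is a hypothetical simplification, not the actual hdfs/fs.py code: the `init` helper and the stand-in connection object are illustrative only.

```python
import socket

# Hypothetical sketch of the scheme described above: fs instances are
# cached by (IP, port, user), so repeated connections to the same
# resolved address reuse one underlying instance.
_CACHE = {}

def init(host, port, user):
    """Return a cached fs handle for (host, port, user), creating it if needed."""
    ip = socket.gethostbyname(host)  # a host name may resolve to any of its IPs
    key = (ip, port, user)
    if key not in _CACHE:
        _CACHE[key] = object()  # stand-in for the real connection object
    return _CACHE[key]
```

Note that two calls with different host names yield the same entry only if both names happen to resolve to the same IP; nothing here guarantees that distinct entries belong to distinct name nodes.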
In test_utils.py, test_hdfs_fs.cache incorrectly checks that connections to "default" and to the explicit HDFS_HOST value map to the same cache item, but this can easily fail. For instance, on EC2 I currently have the following setup:
$ grep -A1 defaultFS /opt/hadoop/etc/hadoop/core-site.xml
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
$ python -c "import os, socket; print([socket.gethostbyname(_) for _ in ('localhost', os.getenv('HOSTNAME'))])"
['127.0.0.1', '172.31.18.195']
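With this setup, "default" (which goes through fs.defaultFS, i.e., localhost) and the machine's own hostname resolve to different IPs, so the cache keys differ even though both names refer to the same name node. A minimal illustration, assuming the hypothetical `cache_key` helper mirrors the (IP, port, user) keying described above:

```python
import socket

# Hypothetical helper mirroring the cache keying: two names for the same
# name node can resolve to different IPs, yielding two distinct keys.
def cache_key(host, port, user):
    return (socket.gethostbyname(host), port, user)

# In the EC2 setup above, "localhost" resolves to 127.0.0.1 while the
# machine's hostname resolves to 172.31.18.195, so the two keys differ
# and the test's same-cache-item assumption breaks.
```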