Sunday, June 8, 2025

DISTCC: client did not provide distcc magic fairy dust

I get these errors from distccd (running in detach mode) every time the distcc client tries to distribute jobs to it:


distccd[14791] (dcc_create_kids) up to 1 children
distccd[14791] (dcc_create_kids) up to 2 children
distccd[14791] (dcc_create_kids) up to 3 children
distccd[14791] (dcc_create_kids) up to 4 children
distccd[14791] (dcc_create_kids) up to 5 children
distccd[14791] (dcc_create_kids) up to 6 children
distccd[14791] (dcc_create_kids) up to 7 children
distccd[14791] (dcc_create_kids) up to 8 children
distccd[14791] (dcc_create_kids) up to 9 children
distccd[14791] (dcc_create_kids) up to 10 children
distccd[14791] (dcc_create_kids) up to 11 children
distccd[14791] (dcc_create_kids) up to 12 children
distccd[14791] (dcc_create_kids) up to 13 children
distccd[14791] (dcc_create_kids) up to 14 children
distccd[14792] (dcc_check_client) connection from 192.168.100.218:36956
distccd[14792] (check_address_inet) match client 0xda64a8c0, value 0x64a8c0, mask 0xffffff
distccd[14792] (dcc_readx) ERROR: unexpected eof on fd4
distccd[14792] (dcc_r_token_int) ERROR: read failed while waiting for token "DIST"
distccd[14792] (dcc_r_request_header) ERROR: client did not provide distcc magic fairy dust
distccd[14792] (dcc_cleanup_tempfiles_inner) deleted 3 temporary files
distccd[14792] (dcc_job_summary) client: 192.168.100.218:36956 OTHER exit:0 sig:0 core:0 ret:108 time:0ms 
distccd[14793] (dcc_check_client) connection from 192.168.100.218:40390
distccd[14793] (check_address_inet) match client 0xda64a8c0, value 0x64a8c0, mask 0xffffff
distccd[14793] (dcc_readx) ERROR: unexpected eof on fd4
distccd[14793] (dcc_r_token_int) ERROR: read failed while waiting for token "DIST"
distccd[14793] (dcc_r_request_header) ERROR: client did not provide distcc magic fairy dust
distccd[14793] (dcc_cleanup_tempfiles_inner) deleted 3 temporary files
distccd[14793] (dcc_job_summary) client: 192.168.100.218:40390 OTHER exit:0 sig:0 core:0 ret:108 time:0ms 
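
As an aside, the check_address_inet lines show the --allow match in raw hex: on this little-endian machine, 0xda64a8c0 is just 192.168.100.218 with its bytes printed in reverse order, and 0xffffff is the /24 mask. A quick Python check (illustrative only):

import ipaddress
import struct

# distccd prints the client address and allow-mask as host-order hex words.
# On a little-endian machine, repacking them little-endian recovers the
# network-order bytes of the dotted quads.
print(ipaddress.IPv4Address(struct.pack("<I", 0xda64a8c0)))  # 192.168.100.218
print(ipaddress.IPv4Address(struct.pack("<I", 0x00ffffff)))  # 255.255.255.0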


From the distcc source code, the chain of errors is clear: dcc_readx() hit end-of-file on the socket, so dcc_r_token_int() failed while waiting for the "DIST" token:

/**
 * Read a token and value.  The receiver always knows what token name
 * is expected next -- indeed the names are really only there as a
 * sanity check and to aid debugging.
 *
 * @param ifd      fd to read from
 * @param expected 4-char token that is expected to come in next
 * @param val      receives the parameter value
 **/
int dcc_r_token_int(int ifd, const char *expected, unsigned *val)
{
    char buf[13], *bum;
    int ret;
    if (strlen(expected) != 4) {
        rs_log_error("expected token \"%s\" seems wrong", expected);
        return EXIT_PROTOCOL_ERROR;
    }
    if ((ret = dcc_readx(ifd, buf, 12))) {
        rs_log_error("read failed while waiting for token \"%s\"",
                    expected);
        return ret;
    }
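
In other words, the server accepted the TCP connection but reached end-of-file before the client sent the 12-byte request header: the 4-character token "DIST" followed by an 8-digit value (the protocol version). Anything that connects and then closes without speaking the protocol, e.g. a port scanner or a TCP health check, produces exactly this log. A minimal Python sketch that reproduces it (the host name is a placeholder for your distccd server; 3632 is distccd's default TCP port):

import socket

# Connect to distccd and close without sending anything. On the server,
# dcc_readx() hits EOF while waiting for the "DIST" token and distccd logs
# "client did not provide distcc magic fairy dust", just as above.
DISTCCD_HOST = "distcc-server"  # placeholder, substitute your server
DISTCCD_PORT = 3632             # distccd's default port

with socket.create_connection((DISTCCD_HOST, DISTCCD_PORT)):
    pass  # close immediately; the 12-byte request header never arrives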


Monday, February 3, 2025

Python script to find duplicate files

Here is a Python script that scans directories for duplicate files by comparing file sizes and MD5 hashes:


import os
import hashlib
from collections import defaultdict
import argparse


def get_file_hash(filepath):
    """Calculate the MD5 hash of a file's content."""
    hasher = hashlib.md5()
    try:
        with open(filepath, 'rb') as f:
            while True:
                chunk = f.read(8192)  # Read in 8KB chunks to handle large files
                if not chunk:
                    break
                hasher.update(chunk)
    except IOError:
        return None  # Skip files that can't be read
    return hasher.hexdigest()


def find_duplicates(start_directory):
    """Find duplicate files in the specified directory and its subdirectories."""
    file_sizes = defaultdict(list)
    
    # First pass: Group files by size
    for root, dirs, files in os.walk(start_directory):
        for filename in files:
            filepath = os.path.join(root, filename)
            try:
                file_size = os.path.getsize(filepath)
            except OSError:
                continue  # Skip inaccessible files
            file_sizes[file_size].append(filepath)
    
    # Second pass: Compare hashes of files with the same size
    duplicates = []
    for size, paths in file_sizes.items():
        if len(paths) < 2:
            continue  # Skip unique file sizes
        
        hashes = defaultdict(list)
        for path in paths:
            file_hash = get_file_hash(path)
            if file_hash is not None:
                hashes[file_hash].append(path)
        
        # Collect all groups of identical files
        for hash_group in hashes.values():
            if len(hash_group) > 1:
                duplicates.append(hash_group)
    
    return duplicates


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Find duplicate files in a directory tree.')
    parser.add_argument('directory', help='Directory to scan for duplicates')
    args = parser.parse_args()

    if not os.path.isdir(args.directory):
        print(f"Error: '{args.directory}' is not a valid directory")
        raise SystemExit(1)

    duplicate_groups = find_duplicates(args.directory)
    if duplicate_groups:
        print("\nDuplicate files found:")
        for i, group in enumerate(duplicate_groups, 1):
            print(f"\nGroup {i}:")
            for path in group:
                print(f"  {path}")
        print(f"\nFound {len(duplicate_groups)} groups of duplicates")
    else:
        print("\nNo duplicate files found")


Key features:

1. Uses two-pass comparison for efficiency:

   - First groups files by size

   - Then compares MD5 hashes of same-size files

2. Handles large files using chunked reading

3. Skips inaccessible files gracefully

4. Provides clear command-line output

5. Uses MD5 hashing for content comparison


To use:

1. Save as `find_duplicates.py`

2. Run with: `python find_duplicates.py /path/to/directory`


The script will:

1. Scan all subdirectories recursively

2. Identify files with identical content

3. Group duplicates together in the output

4. Show full paths of duplicate files


Note: MD5 is used for speed, but you could modify the script to use SHA-256 for cryptographic-strength hashing by replacing `hashlib.md5()` with `hashlib.sha256()`.
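
For example, a parameterized variant of get_file_hash (a sketch; the hash_factory argument is my addition, not part of the script above) keeps the chunked reading and lets the caller pick the algorithm:

import hashlib

def get_file_hash(filepath, hash_factory=hashlib.sha256):
    """Same chunked reader as above, with a pluggable hash algorithm."""
    hasher = hash_factory()
    try:
        with open(filepath, 'rb') as f:
            while chunk := f.read(8192):  # walrus form needs Python 3.8+
                hasher.update(chunk)
    except IOError:
        return None
    return hasher.hexdigest()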

Sunday, December 15, 2024

Some bit manipulations

This C++ code provides bit-access routines for getting and setting ranges of bits within an integer:


dataBits.h:




#pragma once

#include <cstdint>

template <typename T>
class DataBits
{
public:
    DataBits(T v = 0) : m_data(v) {}

    T get() const { return m_data; }

    void set(T v) { m_data = v; }

    // Build a mask with bits [startpos..endpos] set (0-based, inclusive).
    // Shifting TB(1) instead of a plain int literal keeps masks wider than
    // 31 bits correct for 64-bit types.
    template <typename TB>
    static constexpr TB BitMask(int startpos, int endpos)
    {
        return ((TB(1) << (endpos - startpos + 1)) - 1) << startpos;
    }

    // Extract bits [startpos..endpos], right-aligned.
    constexpr T GetBits(int startpos, int endpos) const
    {
        return (m_data & BitMask<T>(startpos, endpos)) >> startpos;
    }

    // Replace bits [startpos..endpos] with the low bits of newData.
    void SetBits(int startpos, int endpos, T newData)
    {
        auto mask = BitMask<T>(startpos, endpos);
        m_data = (m_data & ~mask) | ((newData << startpos) & mask);
    }

private:
    T m_data;
};



main.cpp:

#include <iostream>
#include <format>
#include <bitset>
#include <iomanip>
#include "dataBits.h"


int main()
{
    std::cout << std::format("{:#04x}", 0x0e) << std::endl;
    std::cout << "BitMask[1..3] = " << std::hex << "0x" << DataBits<uint32_t>::BitMask<uint32_t>(1,3) << std::endl;

    DataBits<uint32_t> bits = 0b1110101001010011;
    std::cout << "bits = " << std::setw(55) << std::bitset<32>(bits.get()) << std::endl;
    auto newbits = 0b010;
    bits.SetBits(13, 15, newbits);
    std::cout << "bits with bits[13..15]=" << std::bitset<3>(newbits) << " -> " << std::bitset<32>(bits.get()) << std::endl;
    std::cout << "BitMask[1..3] = " << std::hex << "0x" << DataBits<uint32_t>::BitMask<uint32_t>(1,3) << std::endl;
    std::cout << "data @bits[13..15] = " << std::bitset<3>(bits.GetBits(13,15)) << std::endl;
    return 0;
}
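
For a quick sanity check of the mask arithmetic, the same expressions can be evaluated in Python (illustrative only, mirroring BitMask, SetBits and GetBits):

def bit_mask(startpos, endpos):
    # Same formula as DataBits::BitMask: a run of (endpos - startpos + 1)
    # one-bits, shifted up to startpos (0-based, inclusive on both ends).
    return ((1 << (endpos - startpos + 1)) - 1) << startpos

assert bit_mask(1, 3) == 0xE                    # matches BitMask[1..3] = 0xe

data = 0b1110101001010011
mask = bit_mask(13, 15)
data = (data & ~mask) | ((0b010 << 13) & mask)  # SetBits(13, 15, 0b010)
assert data == 0b0100101001010011
assert (data & mask) >> 13 == 0b010             # GetBits(13, 15)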