This commit is contained in:
Markus Brueckner 2024-12-10 20:21:07 +01:00
parent 9562e5024b
commit 5f58d7b95b
4 changed files with 170 additions and 0 deletions

2024/day9/Cargo.lock generated Normal file

@@ -0,0 +1,7 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "day9"
version = "0.1.0"

2024/day9/Cargo.toml Normal file

@@ -0,0 +1,6 @@
[package]
name = "day9"
version = "0.1.0"
edition = "2021"
[dependencies]

2024/day9/README.md Normal file

@@ -0,0 +1,31 @@
# Solution Day 9
Disk defragmentation... That's a thing I haven't seen in a long while. It brings back memories of sitting in front of a text
representation of my hard disk, watching the little squares jump back and forth as the defragmentation program started to cluster related blocks together...
The fragmentation here is somewhat simpler than the real one back then, and the stakes are also somewhat lower. No risk
of screwing up data when you accidentally crash.
## Task 1
The approach is rather simple: run two pointers from the front and the back of the disk. Whenever the back pointer encounters an occupied block,
run the front pointer until you find an empty block, copy over the contents from the back and remove them there. Stop when the two
pointers meet in the middle. Afterwards, calculate the checksum.
## Task 2
A bit more involved. My approach searches for a file block from the back, counts its length, then searches from the front
for an empty block with enough free space and moves the data over. Initially I managed to create an endless loop whenever a block could not be moved
at all: the back pointer searches for the next non-free block, and since some files couldn't be moved out of the way, the back
pointer would immediately find the same file again, try to copy it, fail again, and the whole cycle would start over. This is
easily solved by the
```rust
else {
movable_file_idx -= 1; // step before the file that we were unable to move
}
```
block in line 106ff, which steps just _before_ the file we looked at last. This time we stop when we reach file 0, because it
can never be moved (on account of already being at the start of the disk).
This approach has complexity _O(n²)_, which can probably be massively improved (e.g. by finding all empty blocks in one pass,
recording their positions and sizes in a `HashMap`, and then using the _O(1)_ lookups to find a suitable free block when making the second
pass over the actual file blocks. This would bring the complexity down to _O(n)_, if I'm not mistaken.)

2024/day9/src/main.rs Normal file

@@ -0,0 +1,126 @@
type BlockId = usize;
type Disk = Vec<Option<BlockId>>;
fn load_input() -> Disk {
let input = std::fs::read_to_string("input.txt").expect("Should be able to read input file");
// let input = "2333133121414131402";
input
.chars()
.enumerate()
.flat_map(|(idx, ch)| {
let is_empty = idx % 2 == 1;
if let Some(num) = ch.to_digit(10) {
let id = idx / 2;
vec![if is_empty { None } else { Some(id) }; num as usize]
} else {
vec![] // ignoring invalid characters in the input (at the end)
}
})
.collect()
}
fn print_disk(d: &Disk) {
for ch in d {
if let Some(ch) = ch {
print!("{}", ch);
} else {
print!(".");
}
}
println!();
}
fn task1() {
let mut disk = load_input();
// print_disk(&disk);
let mut free_idx = 0;
let mut last_block_idx = disk.len() - 1;
while free_idx < last_block_idx {
// find the first free block
while free_idx < last_block_idx && disk[free_idx].is_some() {
free_idx += 1;
}
// find the last used block
while last_block_idx > free_idx && disk[last_block_idx].is_none() {
last_block_idx -= 1;
}
// found something to swap
if free_idx < last_block_idx {
disk[free_idx] = disk[last_block_idx];
disk[last_block_idx] = None;
}
}
// calculate the checksum
let checksum = disk.iter().enumerate().fold(0, |checksum, (idx, block)| {
if let Some(value) = block {
checksum + idx * value
} else {
checksum
}
});
println!("Task 1: checksum: {checksum}");
}
fn task2() {
let mut disk = load_input();
let mut movable_file_idx = disk.len() - 1;
let mut current_file_id;
while movable_file_idx > 0 {
// find the end of the next file
while movable_file_idx > 0 && disk[movable_file_idx].is_none() {
movable_file_idx -= 1;
}
current_file_id = disk[movable_file_idx];
// find the start of the file
let mut file_length = 0;
while movable_file_idx > 0 && disk[movable_file_idx] == current_file_id {
movable_file_idx -= 1;
file_length += 1;
}
movable_file_idx += 1; // one step forward again because the while loop moved _past_ the start of the file
// find a block from the start that is big enough for the file
let target_idx = 'space_search: {
for idx in 0..movable_file_idx {
if (0..file_length).all(|target_offset| disk[idx + target_offset].is_none()) {
break 'space_search idx;
}
}
movable_file_idx
};
if target_idx < movable_file_idx {
// found a block, move the file
for offset in 0..file_length {
disk[target_idx + offset] = disk[movable_file_idx + offset];
disk[movable_file_idx + offset] = None;
}
} else {
movable_file_idx -= 1; // step before the file that we were unable to move
}
}
// calculate the checksum
let checksum = disk.iter().enumerate().fold(0, |checksum, (idx, block)| {
if let Some(value) = block {
checksum + idx * value
} else {
checksum
}
});
println!("Task 2: checksum: {checksum}");
}
fn main() {
task1();
task2();
}