How to shred free disk space? #22
Tracked at Benjamin-Loison/shred/issues/5.
Related to the issue:
Related to Benjamin-Loison/vim/issues/14.
The Super User answer 19377:
Output:
Related to Benjamin-Loison/coreutils/issues/1.
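For context, a common fallback when `sfill` is not available is to fill the free space with random data and then delete the filler file. A minimal sketch, assuming a single pass is acceptable; the filler path is arbitrary:

```bash
# Fallback when sfill is unavailable: fill the free space with random
# data, flush it to disk, then delete the filler file (single pass only).
# CAUTION: this temporarily exhausts the free space of the filesystem.
sudo dd if=/dev/urandom of=/filler.bin bs=1M status=progress || true
sync
sudo rm /filler.bin
```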
On my Linux Mint 22 Cinnamon Framework 13:
Output:
Unclear whether reaching 117G means it will be finished or only that the first pass is done.
So it took about 10 minutes for 15 GB.
Still the same output of `time sudo sfill -v /; echo $?; matrix-commander -m 'sfill finished!'`.
Output:
Note that after a few minutes it was still gaining more GB. Maybe `df -h /` is not correct. So how is it able to gain additional GB?
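To see whether the reported free space is actually shrinking or growing over time, one can simply poll `df` (plain `watch` usage, nothing `sfill`-specific):

```bash
# Refresh the free-space figure every 10 seconds while sfill runs.
watch -n 10 df -h /
```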
So, to avoid possible data erasure, I pressed Ctrl + C:
Output:
Should test in an environment where I am fine with losing all data.
Could back up a Linux virtual machine having the least total (not free) disk space and proceed with it.
Given the maximum size, estimating when the file writing will finish would be interesting.
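A minimal ETA sketch, assuming the filler file grows at a roughly constant rate; the filler path below is hypothetical, as I have not checked what name `sfill` actually uses:

```bash
#!/usr/bin/env bash
# Sample the filler file size twice, derive the write rate, and divide
# the remaining free space by it. FILLER is a hypothetical path.
FILLER=/filler.bin
INTERVAL=60
size_before=$(stat -c %s "$FILLER")
sleep "$INTERVAL"
size_after=$(stat -c %s "$FILLER")
rate=$(( (size_after - size_before) / INTERVAL ))  # bytes per second
free=$(df --output=avail -B1 / | tail -n 1)        # remaining bytes
(( rate > 0 )) && echo "ETA for the current pass: $(( free / rate )) s"
```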
Pegasus seems more appropriate for this long-running task.
does not return anything.
Output:
Related to Benjamin-Loison/pv/issues/2.
Output:
A `*` seems to have been added. Maybe it's the first pass, hence it is very slow.
Output:
So it seems to be a `*` for each pass.
Output:
Could encryption be used to make it faster? Maybe not retrospectively.
Would help Benjamin-Loison/ecryptfs/issues/2.
Related to Benjamin-Loison/shred/issues/12.
Related to Benjamin-Loison/shred/issues/1#issuecomment-2557852.
It is unclear what it works on.
Should test getting the file back with `testdisk`.
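To test recovery, `photorec` (shipped alongside `testdisk`) carves files out of raw disk space; the device name below is an assumption:

```bash
# Try to carve deleted files back out of the wiped free space.
# /dev/sda is an assumed device name; find the real one with: lsblk
sudo photorec /dev/sda
```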
Output:
Not finished.
Output:
Not finished.
So it takes about 2 minutes per GB, hence about 2,000 minutes per TB, that is about 33 hours.
So the default secure mode with 38 passes would take about 33 * 38 = 1,254 hours, that is about 52 days...
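A back-of-the-envelope check of these durations; additionally, if my reading of the secure-delete man page is right, each `-l` lessens the security level and `-l -l` reportedly leaves a single random pass, which would bring `sfill` back to roughly the single-pass cost:

```bash
# Duration estimates at ~33 hours per pass per TB (measured above).
echo "sfill default (38 passes): $(( 33 * 38 )) hours"  # 1254 h, ~52 days
# Assumed from the secure-delete man page: each -l lowers the security
# level; -l -l should leave a single random pass.
time sudo sfill -l -l -v /
```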
`shred` makes 3 passes by default, so it would take about 100 hours, that is about 4 days, per TB.
Output:
Output:
Source: the Super User answer 706129.
It is not very meaningful to me.
Output:
DuckDuckGo and Google search: `"sfill" while "df" shows 0`.
The Ask Ubuntu question 961558 is focused on getting progress.
Output:
Related to Benjamin-Loison/cinnamon/issues/179.
Related to Benjamin-Loison/cinnamon/issues/180.
Output:
Does it take into account blocks only accessible by `root` (it is defined per partition, if I remember correctly)? Maybe, as `sudo` is needed to use `/`; see the `tune2fs` check after the related links below.
Related to Benjamin-Loison/cinnamon/issues/137.
Related to Benjamin-Loison/bash/issues/14.
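Regarding the blocks reserved for `root` mentioned above: on ext2/3/4 this reservation is indeed per filesystem. A way to check it, assuming an ext4 root partition on a hypothetical device:

```bash
# ext2/3/4 reserve a percentage of blocks for root, per filesystem.
# /dev/sda2 is an assumed device; find the real one with: df /
sudo tune2fs -l /dev/sda2 | grep -i 'reserved block count'
```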
Can files be created when `df -h /` claims there is no space anymore? It would be nice to still be able to have some manual low activity in parallel during the process. In theory, if it only reads from disk and uses RAM, it is fine.
On the Linux Mint 22 Cinnamon owned by the person:
Before:
After:
How has this process freed 11 GB according to `df -h /`?
So it took about 12 hours to erase 388 GB, so it erases about 32 GB per hour. So it needs about 31 hours to erase 1 TB.
Firefox, YouTube, and KeePassXC unlock work fine after reboot.
However, one then has to keep in mind that notably:
- `/tmp/` (should investigate that, see issues/58#issuecomment-3106)
- `/root/`
- `/var/www/html/`
- `/file.swap`
- `/var/spool/cron/crontabs/`
are still in plaintext. As well as temporary operations like zipping?
Related to Benjamin-Loison/ecryptfs/issues/3.
I quickly verified the contents of:
- `/root/`
- `/var/www/html/`
- `crontab -l`
on the given person's laptop.
Maybe with `sfill` one can specify `/folder/` instead of `/` if one needs to shred now-deleted files that were in `/folder/`.
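As far as I understand, `sfill` fills the free space of whichever filesystem contains the directory it is given, so pointing it at `/folder/` should be equivalent to targeting `/` when both are on the same mount (an assumption worth verifying):

```bash
# Should wipe the same free space as targeting /, provided /folder/
# lives on the same filesystem (assumption, not verified).
sudo sfill -v /folder/
```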