This one-liner will find duplicate files in the current directory and all sub-directories. It uses hash values of the files, so it doesn’t matter if the file names have changed. If the content is the same, the hash will be the same and it will be considered a duplicate.
# find duplicate files
# Kenward Bradley 2016-12-29
Get-ChildItem -Recurse | Get-FileHash | Group-Object -Property Hash | Where-Object Count -GT 1 | foreach {$_.Group | select Path, Hash}
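If it helps to see what each stage does, the same pipeline can be written one stage per line with comments. This is only a formatting sketch (PowerShell keeps reading the next line whenever a line ends with a pipe, and the aliases foreach/select are spelled out), so the behavior is unchanged:

# Same pipeline, one stage per line -- a readability sketch, behavior unchanged.
Get-ChildItem -Recurse |                                    # walk the current folder and every sub-folder
    Get-FileHash |                                          # hash each file's content (SHA-256 by default)
    Group-Object -Property Hash |                           # group files that share the same hash
    Where-Object Count -GT 1 |                              # keep only groups with more than one file
    ForEach-Object { $_.Group | Select-Object Path, Hash }  # list each duplicate's path and hash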
Dude, I like it.
This is genius!! Excellent job!
Hi Ken.
I have been converting my old 45’s into mp3’s for some years now (I have a lot!). This has led to duplicates within the Music folder under My PC. I take it that your script above will do the trick? When you say “the content is the same”, does it matter if Version 1 of an mp3 is a shade longer than Version 2 (the duplicate), even though the name of each ‘child’ of the Music folder is spelled identically? Naturally, I assume that if there is a typo in one of the two names, it will be ‘seen’ as a separate file/version. I hope this is a clear explanation.
This one-liner will find files that have the exact same content, ignoring filename. So if the MP3 is slightly longer, it would be considered different by this command.
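You can reproduce this behavior with a couple of throw-away files (the file names and contents below are just for illustration): an exact copy under a different name groups as a duplicate, while a file whose content is even slightly longer gets a completely different hash and is not matched.

# Hypothetical demo: identical content matches, longer content does not.
'same bytes' | Set-Content original.txt
'same bytes' | Set-Content renamed-copy.txt                  # exact copy, different name
'same bytes plus a little more' | Set-Content longer.txt     # near-duplicate, different content

Get-FileHash original.txt, renamed-copy.txt, longer.txt |
    Group-Object -Property Hash |
    Where-Object Count -GT 1 |
    ForEach-Object { $_.Group | Select-Object Path, Hash }
# Only original.txt and renamed-copy.txt are reported; longer.txt is left out.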
Yes, and I’d also recommend using -LiteralPath with Get-ChildItem for folders with special characters in their names, or else it won’t work.
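One way to work that suggestion into the one-liner might look like the sketch below (the starting folder 'C:\Music [backup]' is made up). Wildcard characters such as square brackets can trip up both Get-ChildItem’s default -Path handling and Get-FileHash’s path binding, so this sketch passes literal paths to both:

# Sketch: handle folder and file names containing wildcard characters like [ ].
Get-ChildItem -LiteralPath 'C:\Music [backup]' -Recurse -File |
    ForEach-Object { Get-FileHash -LiteralPath $_.FullName } |
    Group-Object -Property Hash |
    Where-Object Count -GT 1 |
    ForEach-Object { $_.Group | Select-Object Path, Hash }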