Sure, RLE is common enough, LZ is well-known too, and Huffman and arithmetic coding also work, along with a large number of other algos.
However, my idea was a bit more general. Compression usually means defining a function that, given some data, turns it into other data (this is the decompression function, turning compressed input into decompressed output). Such a function is only useful when its input is shorter than its output. Typically you do this by defining some kind of encoding that describes the output, usually in terms of patterns (patterns stored in a dictionary, patterns built from the input, or whatever), and the decompression function reads this encoded data and writes out the decoded (decompressed) data.
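To make that concrete, here's a toy run-length coder as a minimal sketch (function names are made up, not from any real library): the "encoding" is just (count, byte) pairs, and the decompression function reads those pairs and reproduces the original data.

```python
def rle_compress(data: bytes) -> bytes:
    """Encode data as (run length, byte) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append(run)      # run length (1..255)
        out.append(data[i])  # the repeated byte
        i += run
    return bytes(out)

def rle_decompress(data: bytes) -> bytes:
    """Read (run length, byte) pairs and expand them back out."""
    out = bytearray()
    for count, byte in zip(data[0::2], data[1::2]):
        out.extend([byte] * count)
    return bytes(out)
```

Note how this illustrates the "only useful when the input is shorter" point: `b"a" * 100` compresses to 2 bytes, but data with no runs actually doubles in size under this encoding.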
I'm not sure why I bothered describing the obvious here, since
>>1 could easily just read up on it online. There's a huge breadth of resources on the topic, there are many open source compressors/decompressors, almost all archive formats worth considering are documented, and you usually have open source code to examine if you want to understand the details.