The democratization of AI and access to code language models is a pivotal goal in the field of artificial intelligence. Large Language Models (LLMs) have shown exceptional capabilities in code intelligence tasks, but at a high computational cost. This paper addresses these challenges by presenting a comprehensive approach to scaling down Code Intelligence LLMs. We focus on training smaller code language models, which lowers the computational cost of both training and inference. We extend these models to diverse programming languages, enabling code completion tasks across various domains.