Twenty-five years ago, the concept of remote work over the internet was groundbreaking. During my time at university, I participated in research on the development of distance education systems. We recognized that the World Wide Web would provide a tremendous opportunity by reducing the cost of access to a vast repository of knowledge.
During that period, I had my initial experience with distance learning, engaging with professors located far away in Australia and Spain. I also had my first taste of remote work, collaborating with a client based in the United States.
Today, internet access is nearly universal, and modern global software erases geographical boundaries, connecting professionals seamlessly. Shared fundamentals and techniques enable us to work from any location in the world.
Over the years, I have honed my professional skills, actively participating in significant projects and working with a wide range of technologies across various industries. Additionally, I have contributed to numerous smaller projects aimed at solving diverse problems. Having been involved in all stages of the development process, from conceptualization to application management, and from hardware to software, I have acquired strong expertise in development and the challenges that surround it.
The field of IT continues to progress rapidly, and the lifelong learning model has long been crucial in this industry. Professionals are constantly required to adapt to a changing world in order to stay competitive.
I graduated from Bauman State Technical University with a B.S. degree in 2001 and an M.S. degree in 2003. This university, the oldest and largest technical university in Russia, consistently ranks first in both official and business ratings.
My alma mater is renowned worldwide for its engineering education system, known as the "Russian method". This method has been widely adopted in various countries and has influenced the training processes at the Massachusetts Institute of Technology (MIT) and other technical universities in the United States.
My specialization was in Computer Systems, Complexes, and Networks. I completed courses covering the fundamentals of computer science, including materials physics, circuit design, low-level machine code, automata theory, translators and compilers, graph and network theory, data structures, databases, and high-level programming languages.
After six years of education, supported by acquired knowledge and extensive practical experience, I embarked on my professional journey in the exciting field of IT.
To further enhance my expertise, I spent an additional three years pursuing in-depth knowledge in Economics through postgraduate studies in Business and Foreign Trade Activities. This learning experience enriched my understanding of economic theory and philosophy and laid the foundation for comprehending organizational structures, communication, goals, risks, and costs. This knowledge continues to have a profound impact on my work in various industries, particularly in business process automation and the development of new applications.
Working on different projects at various companies, I have encountered a plethora of techniques and technologies. There is no universally correct technology stack that fits every project. The choice depends on many factors, including project goals, customer requirements, time constraints, existing and planned infrastructure, expected performance, developer availability, legacy systems, costs, and more. Each project I have been involved in has given me valuable experience and a deeper understanding of the pros and cons of the chosen technologies, their limitations, key features, and management methodologies.
Behind every successful startup with exponential user growth are immense efforts by system architects and senior specialists. Undoubtedly, the project management triangle of Time, Quality, and Cost has the most significant impact on such projects.
Initially, it is crucial to validate the project's idea as quickly as possible by implementing core features within budget and time constraints. Once the main idea is confirmed and the project secures funding, the product needs to be enriched with additional options to cater to the growing number of users and their demands. The development team will require additional specialists to implement the necessary functionality on top of the existing software and hardware architecture, resulting in a rapid increase in code volume. Subsequently, the project will face the need for hardware expansion, and in certain cases geographical hardware distribution may become necessary. At this stage, earlier architectural decisions (which were correct and matched the old requirements and limitations) may need to be revisited. However, the desire to change the architecture will conflict with the tasks of modernizing the existing code, growing the product's functionality, and maintaining operational stability.
Drawing from my experience in both the B2B and B2C sectors, it is worth noting that there are two main streams of technology exchange between them. Let's refer to the first stream as "backend." This stream is driven by the R&D efforts of global corporations. The results initially reach the corporate sector, and it takes some time to simplify them and reduce maintenance costs before they become available for widespread public use, either in specific products or as standalone technologies. Prime examples of this process are Docker, RabbitMQ, REST, TypeScript, Redis, and others. This predictable and lengthy lifecycle facilitates the search for successful architectural solutions that provide core stability for new products.

The counterflow, which I will term "frontend," is influenced by the diverse needs of end users. This stream is driven by market leaders, associations, developer communities, and enthusiasts: Enzyme from Airbnb, React from Facebook, Redux, jQuery, Vue, numerous frameworks, bootstrap libraries, transpilers, and an endless list of others. Modern frameworks, tools, and libraries help you stay at the forefront of trends while often reducing development costs and providing effective solutions. However, the pursuit of cutting-edge technology is accompanied by challenges such as rapid interface/API changes, outdated documentation, compatibility issues, and potential problems with third-party support. One of the main risks here is the emergence of competing technologies that can diminish funding and support for the chosen technology stack.
The most interesting experience I have gained from B2B projects is the exchange of cross-industry knowledge. For instance, learning about the energy industry can be highly beneficial. This industry runs a complex system that handles real-time signals from geographically distributed hardware, enabling the transfer of electricity across the country. Interestingly, this task shares similarities with handling user requests on the World Wide Web. Many patterns of concurrent signal processing already employed in the electricity industry can be applied in B2C projects. Conversely, high-end user interfaces from B2C projects can be brought into the corporate sector to create more appealing interfaces, or lightweight APIs can be integrated over REST, reducing costs and deployment time for non-high-availability processes.
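As an illustrative sketch of that shared pattern (all names and numbers below are hypothetical), the same fan-out/fan-in approach serves both grid telemetry and web requests: independent signals are fanned out to a bounded worker pool and the results are collected back in one place.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical payloads: readings from distributed substations.
# The same shape would fit a batch of incoming web requests.
signals = [{"source": f"substation-{i}", "load_mw": 10.0 * i} for i in range(8)]

def process(signal):
    # Stand-in for validation, transformation, and routing logic.
    return signal["source"], signal["load_mw"] * 2

# Fan the signals out to a bounded pool of workers, then fan results back in.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process, signals))
```

The bounded pool is the important design choice: it applies backpressure, so a burst of signals (or requests) queues up instead of exhausting the system.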
Another noteworthy experience I'd like to mention relates to custom application development. This activity requires well-organized team management with clearly defined roles. Delving into the customer's business processes compels IT engineers to learn to quickly assess infrastructure and legacy systems, which demands a broad knowledge base across various IT domains. It is essential to develop applications within the designated time and budget, while also ensuring good documentation and successful integration into the existing IT and business environments.
Throughout the years, I have participated in projects spanning multiple industries, including banking, telecommunications, construction and design, electricity, oil, light industry, logistics and warehouses, courts, as well as various government organizations.
The evolution of software development teaches us to move from the specific to the general through multilayer abstractions, which helps large teams create sophisticated products. High-load projects, however, follow the opposite path: to achieve maximum performance, it is necessary to move from the general to the specific.
I have spent considerable time on research and development, code refactoring, and profiling. I learned to pay attention to the physical structure of data storage, to handle compiler-specific code, and to "hurry slowly". The latter is particularly important when working on, for example, a countrywide unified payment system that serves millions of money transfers, where a small mistake can result in significant losses.
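To illustrate what attention to the physical structure of data storage can look like (the record layout below is hypothetical, not taken from any real payment system): fixed-width binary records with money held in integer minor units allow O(1) seeks into a file and avoid floating-point rounding errors.

```python
import struct

# Hypothetical fixed-width layout for a money-transfer record:
# 8-byte account id, 8-byte signed amount in cents, 4-byte ISO 4217 numeric code.
RECORD = struct.Struct("<QqI")  # little-endian: uint64, int64, uint32

def pack_transfer(account_id, amount_cents, currency_code):
    # Integer minor units (cents) keep amounts exact; floats would drift.
    return RECORD.pack(account_id, amount_cents, currency_code)

record = pack_transfer(42, 199_99, 840)  # 840 is the numeric code for USD
account_id, amount_cents, currency = RECORD.unpack(record)
```

Because every record is exactly `RECORD.size` bytes, the n-th record of a file sits at offset `n * RECORD.size`, so lookups need no index scan.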
The operational stability of high-load projects is as important as their performance. This requires being cautious with third-party updates, preparing comprehensive test coverage backed by staging environments, and studying how different parts of the code affect each other.
Lastly, I want to mention that high-load (sub)projects provide an excellent opportunity to enhance professionalism and cultivate a thirst for new knowledge.
PHP, PEAR, CSS
.Net, JScript, C#, MSSQL (T-SQL), Sharepoint
IBM WebSphere, ESB, SOA, MQ, DB2
SOAP, WSDL, REST, Webservices, Microservices
Docker, Kubernetes, Symfony
AWS, Google Cloud Platform
...and I continue to study innovations
Please fill in the form below, and I will contact you as soon as possible.