Q: The Zhiji L7 was once positioned around driving and handling control, but it has now turned toward intelligence. Will that blur its positioning?
A: Actually, this is not a change of direction at all. Looking back, the Zhiji project had three original intentions:
First, compared with fuel vehicles, electrification brings a brand-new experience of driving, handling, and upgradability;
Second, deep intelligence pushes scenario-level intelligence beyond everything that was previously imaginable;
Third, with per capita GDP exceeding USD 10,000, consumption upgrading and aesthetic upgrading have reshaped every consumer industry in China, and the automobile industry is no exception. Chinese consumers' demand for upgraded consumption and aesthetics will be fully reflected in market performance over the coming two years.
Electrification and intelligence are two indispensable sides of a good intelligent electric vehicle. Zhiji has already proved the product positioning of a new-world flagship at Tianmashan, but deep intelligence must be the other side of a good intelligent electric vehicle, and there is no conflict between the two.
As the times develop, the opening-up and integration of intelligent electric vehicles in both power and intelligence will bring new experiences. The supercar mode is exceptionally powerful and controllable; at the same time, on top of the powertrain and handling, the intelligent cabin needs to give users auditory, tactile, somatosensory, and even olfactory experiences and fuse them into an immersive intelligent experience.
Electrification and intelligence are not contradictory. What Zhiji hopes to do is bring users a brand-new, top-level experience in both directions at the same time.
Q: Regarding the intelligent-driving configuration and functions Zhiji has demonstrated: why were Zhiji's five standards for smart cars framed around these five aspects? Were other dimensions considered before settling on these?
A: The five standards for smart cars will become a very clear direction for the development of intelligent electric vehicles, spanning a wide range of intelligent thinking and imagination and diverse functional forms.
The first is intelligent driving, which is the underlying logic. Intelligent driving is the most revolutionary subversion of the car experience since the gasoline car was invented 136 years ago, and it carries the heaviest weight among the five dimensions. It is the most fundamental, core, and symbolic change that distinguishes intelligent electric vehicles from traditional vehicles, and it is highly subversive and emblematic in terms of experience.
It is essential to free traditional car users, as far as possible, from the energy and attention they spend on driving, through data-driven algorithms and closed-loop, automated flywheel iteration.
On that premise, and considering cloud-vehicle integration, the deep intelligence of the whole vehicle will bring a new experience, and scenario intelligence and in-car mobile socialization will follow. Starting with the L7, Zhiji cars will have very strong scenario intelligence and the ability to evolve continuously, giving consumers far more room to imagine intelligent scenario experiences. As a super-sized intelligent terminal, the car has already far surpassed the smartphone in computing power, storage, communication capability, and endurance, which gives the smart car a natural product and technology foundation for immersive experiences. Data-driven development makes IM AD more human-like: through the data factory we can identify the most valuable data, use it to optimize functions, feed the results back to users, and exchange data through the data rights plan to achieve continuous upgrading.
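To make the closed-loop idea concrete, here is a minimal, purely illustrative sketch of a data flywheel: collect events from the fleet, keep the valuable ones, retrain, and only push an update when the model improves. All names, filters, and thresholds are hypothetical assumptions, not Zhiji's actual data-factory pipeline.

```python
# Minimal sketch of a closed-loop data flywheel (all names hypothetical):
# collect driving events, keep the valuable ones, retrain, and ship an
# update only if the new model evaluates better than the current one.
import random
from dataclasses import dataclass

@dataclass
class DrivingEvent:
    scene: str          # e.g. "cut_in", "traffic_jam"
    takeover: bool      # did the driver take over from the assistant?

def collect_fleet_events(n: int) -> list[DrivingEvent]:
    """Stand-in for uploading anonymized events from the fleet."""
    scenes = ["cut_in", "traffic_jam", "highway_cruise"]
    return [DrivingEvent(random.choice(scenes), random.random() < 0.1) for _ in range(n)]

def is_valuable(event: DrivingEvent) -> bool:
    """Data-factory filter: keep rare or failure-revealing scenes."""
    return event.takeover or event.scene == "cut_in"

def retrain(dataset: list[DrivingEvent]) -> float:
    """Stand-in for training; score improves as more hard cases are collected."""
    return min(1.0, 0.5 + 0.01 * len(dataset))

def flywheel_iteration(dataset: list[DrivingEvent], current_score: float):
    dataset = dataset + [e for e in collect_fleet_events(500) if is_valuable(e)]
    new_score = retrain(dataset)
    if new_score > current_score:   # only push OTA when the model improves
        print(f"pushing OTA update, score {current_score:.2f} -> {new_score:.2f}")
        current_score = new_score
    return dataset, current_score

dataset, score = [], 0.5
for _ in range(3):
    dataset, score = flywheel_iteration(dataset, score)
```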
Time spent driving has been a relative social blank spot. Social needs during that time will become a very important appeal of smart cars, and the car is the perfect carrier for deep socialization in the 5G era.
In terms of continuous evolution, when building the original architecture of the Zhiji L7 and iO we reserved ample bandwidth for both software and hardware: the software can evolve, and the hardware has enough headroom for upgrades. We are also exploring the possibility of upgrading other hardware. In the coming era of intelligent electric vehicles, technology iterates quickly, and hardware and software must be upgraded together to deliver the best experience.
Q: At present, L4 remains an almost unreachable peak in autonomous driving. What is Zhiji's development route through the L2-L3 stage?
A: L4-class advanced autonomous driving is approaching, and SAIC and Zhiji have forward-looking reserves of the relevant technologies. At present we use algorithms trained for L4 autonomous driving to empower Zhiji's IM AD and realize an "L2.99" assisted-driving function.
The basic assisted-driving functions can be experienced right at delivery. From June 18, users can experience a high-starting-point driver-assistance function without a long wait, and the basic assistance functions already deliver first-echelon performance. It can truly be called "delivery is the experience" and "amazing from the debut";
Starting in the third quarter of this year, higher-order intelligent driving functions are expected to be pushed out gradually, including NOA for highway and elevated-road scenarios, high-precision automatic parking, and trust enhancement;
We have put extra thought into making the intelligent driving experience "more human-like." For example, trust enhancement establishes in-depth communication between driver and vehicle, so the driver clearly perceives the function's boundaries and knows when to take over instead of feeling fear; this will be provided via OTA. There are also highlights in the detailed experience, such as the Zhiji L7's unique WLC high-precision automatic parking, which integrates the wireless-charging function: using visual perception and APA capability, models with optional wireless charging can achieve one-button, high-precision parking alignment so that wireless charging works at full power, which reflects thinking from the user's point of view. We will also make full use of the 39-inch ultra-wide landscape screen to create a practical "air navigation" mode with an immersive experience, to be provided via OTA in the future. Being "more human-like" is what differentiates IM AD and gives users a brand-new experience.
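As a rough illustration of the one-button alignment idea described above, here is a hypothetical sketch of a vision-guided loop that nudges the car until the estimated offset to the charging pad is within tolerance. The perception stub, gain, and tolerance values are invented for illustration and are not the actual IM AD / WLC implementation.

```python
# Hypothetical sketch of vision-guided alignment over a wireless-charging pad:
# iterate small position corrections until the estimated offset is within the
# coil's tolerance. All numbers and the perception stub are illustrative only.
def estimate_offset_cm(position: list[float]) -> tuple[float, float]:
    """Stand-in for a visual estimate of (lateral, longitudinal) offset to pad center."""
    return position[0], position[1]

def align_over_pad(position: list[float], tolerance_cm: float = 2.0, gain: float = 0.5) -> list[float]:
    for _ in range(50):                                 # bounded number of micro-adjustments
        dx, dy = estimate_offset_cm(position)
        if abs(dx) <= tolerance_cm and abs(dy) <= tolerance_cm:
            print(f"aligned: offset ({dx:.1f}, {dy:.1f}) cm, charging can start")
            return position
        position = [dx - gain * dx, dy - gain * dy]     # proportional correction step
    print("alignment failed, ask driver to retry")
    return position

align_over_pad([18.0, -9.0])
```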
In the future, with the continuous evolution enabled by the cloud-based data factory and scalable intelligent-driving hardware, the ultimate goal of IM AD is to enter the first echelon of intelligent driving worldwide.
Judging from the measured performance reported by many media outlets, IM AD handles the highest-frequency traffic-jam scenarios and the most dangerous scenarios of abnormally behaving vehicles very well, and its "more human-like" behavior surpasses Tesla; even against Tesla's strongest cornering ability, IM AD achieves comparable results. As for automatic parking, many journalists and users have experienced its current state: in most cases it completes the maneuver almost in one smooth pass. Perpendicular parking is supported today; parking in other types of spaces is still being polished and will be pushed via OTA soon.
Q: Apple has just announced its new CarPlay system, which goes deeper into vehicle data. How does Zhiji think about its intelligent cockpit system?
A: CarPlay is very popular with BBA users. What we need to do is deep intelligence. CarPlay's functions, visual interaction, and information display all follow its existing design, whereas, based on the SOA atomized architecture and leapfrog integration, the intelligent cockpit of the Zhiji L7 has evolved from functional intelligence to scenario intelligence.
For example, the air navigation mode requires deep integration of the intelligent cabin, chassis control, autonomous driving, vision, and radar. Likewise, the intelligent speed-bump mode, elderly mode, and mountain-road mode offered by the intelligent chassis need to connect the car's deep functions according to each user's personalized experience and present them through humanized interaction in the intelligent cabin. We advocate deep intelligence: through the user's data and an understanding of real scenarios, all the functions a smart car can provide, and all the capabilities of SOA, can truly vary from user to user and from scenario to scenario. At the level of underlying logic, this is difficult for CarPlay to do.
Q: How has Zhiji achieved the evolution from functional intelligence to scenario intelligence? Will there be more such scenarios in the future?
A: Whether in terms of the vehicle's overall architecture, the SOA software architecture, the planning of the super data factory, or the closed-loop automated iteration capability we have, the goal is to expand the space of imagination without limit, so that over the long term we can genuinely expand that space of imagination and of functionality according to users' needs.
A simple analogy is a Chinese word that combines "connecting" and "fusing"; Chinese culture is rich, and the word has many different usage scenarios. Applied to smart cars, I see it as two different dimensions. First we must "connect": SOA connects the underlying ECUs, and through atomization we can link these functions up first, which gives us the ability to combine them freely and flexibly according to the scenario. "Fusing," however, is another stage. If functions are siloed, there is little you can do with them; if the domains are kept relatively separate, it is hard to truly fuse the chassis domain with the smart-cabin domain. That fusion is deep integration, and it can produce a very coherent and deeply intelligent experience. This is one source of our thinking.
First we "connect" at the technology level, then we "fuse" at the product level, truly integrating things together to produce functions that users would not have expected yet particularly need, genuinely breaking the functional barriers between controllers and, through cross-domain fusion, providing users with exactly the functions they need.
Traditional carmakers used to decide what functions to offer from the perspective of the brand or the OEM, and users may not have truly needed them. With the ability to "fuse," and even through data, we can now understand our users better, so that functions connect to our understanding of the user: we can understand and even predict which function a user needs in which scenario, customize a function you genuinely need through SOA, and push it to you at exactly the right moment. This is the "connecting" and "fusing" of scenario intelligence. If you think of it as martial arts, "connecting" is the foundation and "fusing" is a higher level of skill.
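As a rough sketch of what composing "atomic services" across domains might look like, here is a hypothetical example in which chassis, cabin, and ADAS capabilities are registered as addressable services and combined into a speed-bump scene. The service names and the orchestration are illustrative assumptions, not Zhiji's actual SOA API.

```python
# Hypothetical sketch of composing SOA "atomic services" into a scene function.
# All service names and the "speed bump" orchestration are illustrative only.
from typing import Callable

registry: dict[str, Callable[..., None]] = {}

def atomic_service(name: str):
    """Register a cross-domain capability as an addressable atomic service."""
    def wrap(fn: Callable[..., None]):
        registry[name] = fn
        return fn
    return wrap

@atomic_service("chassis.soften_suspension")
def soften_suspension(level: int) -> None:
    print(f"[chassis] suspension softened to level {level}")

@atomic_service("cabin.show_hint")
def show_hint(text: str) -> None:
    print(f"[cabin] hint on screen: {text}")

@atomic_service("adas.limit_speed")
def limit_speed(kph: int) -> None:
    print(f"[adas] speed capped at {kph} km/h")

def scene_speed_bump() -> None:
    """Scene intelligence = cross-domain composition of atomic services."""
    registry["adas.limit_speed"](20)
    registry["chassis.soften_suspension"](3)
    registry["cabin.show_hint"]("Speed bump ahead, suspension softened")

scene_speed_bump()
```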
Q: Yesterday you mentioned offering the industry's first intelligent hardware upgrade. What is the difference between this kind of upgrade and FOTA?
A: An intelligent hardware upgrade is completely different from FOTA. With FOTA, the hardware itself does not change for a period of time, or over the life of your car; what FOTA mostly does is flash the hardware's drivers and software so that its capabilities better fit our intelligence needs. The intelligent hardware upgrade we advocate is an upgrade of the hardware itself.
At yesterday's press conference we emphasized that the Zhiji L7 currently provides a vision-centric IM AD intelligent driving system built on NVIDIA Xavier, and that the lidar + Orin system was developed almost in parallel. Once conditions are ripe, we will replace this hardware for the committed users in an orderly manner.
The roof of the Zhiji L7 will come in two states. If you choose the model with the twin "ears" reserved for lidar, that part can be replaced at the original factory later; after the replacement you gain the hardware capability of the second-stage lidar upgrade, and in places inside the car that you cannot see, the supporting hardware connections are already in place.
Logically, the hardware platform determines the upper limit of a function, and through hardware upgrades plus continuous software iteration we will keep raising that ceiling. At the same time, we are actively exploring hardware areas such as screens to further broaden and raise the ceiling of the car's level of intelligence.
Q: It was amazing to see Carlog make its first public appearance at yesterday's press conference. How did Zhiji think about Carlog in the early stages of development, and why put this function on the Zhiji L7?
A: For Carlog, I was its first product manager. Because the Carlog project was so challenging there are many anecdotes around it, so let me share one or two:
1. Our starting point was that in daily life the mobile phone has become an important carrier for recording and sharing life. Whether you are walking, riding a bus, taking a high-speed train, or flying, you can easily record the moment with your phone. Only in the car, when you see a very beautiful scene, whether on a winding mountain road or under the plane trees shedding their leaves on Huaihai Road, and you want to capture it elegantly while driving and share it with friends, has this been hard to do. Either you shoot with your phone through a window that may be rain-streaked and unclear, or you shoot while driving, which is particularly unsafe. This need is very real for the time spent in the car, especially as we come to understand the deep demands of the coming 5G social era; we do not think the two or three hours a day spent in the car should be a social blind spot.
2. Back then, when we built the Carlog system, the process was genuinely painful. Why? Because the automobile industry has many safety requirements, especially functional-safety requirements, so the performance demands on hardware and components, such as the camera system, are very high. We visited many automotive camera companies. About two years ago, automotive cameras seemed not to exceed 2 megapixels, so that was the scheme almost everyone used; later we found it had grown to 5-8 megapixel cameras.
But as we all know, the iPhone we use today, although its single-camera capability has not improved much, has stuck with 12 megapixels generation after generation. Looking at other phones, Xiaomi and Huawei are at 40 megapixels, and some phones reach 100 megapixels. So we felt a huge gap: the cameras in cars seemed very different from the cameras used in consumer electronics.
So we went to look at the cameras of many mobile-phone suppliers. Although their pixel counts were high enough, they could not meet automotive requirements for safety, durability, temperature, and vibration. Later we had very in-depth exchanges with Yushun Optics, among the strongest in the world. We did not simply buy an existing camera system; we set up a genuinely deep joint project team, combining their understanding of cameras, imaging, and DSP with Zhiji's vehicle-level standards and requirements.
That is how we eventually arrived at a very good solution: three 48-megapixel cameras adding up to well over 100 megapixels, with high-quality AI image stitching producing 180-degree ultra-wide shots. Because the car's viewing angle is very different from a phone's, the ultra-wide perspective is better suited to shooting beautiful street scenes and natural scenery, which we considered indispensable.
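For readers who want a feel for the stitching step, here is a minimal sketch using OpenCV's generic image stitcher to merge three overlapping frames into one panorama. The file names are placeholders, and the real Carlog pipeline (vehicle-grade cameras, real-time AI stitching) is of course far more involved.

```python
# Minimal sketch of stitching three wide-angle frames into one panorama with
# OpenCV's generic stitcher. File names are placeholders; this is not the
# actual Carlog implementation.
import cv2

frames = [cv2.imread(p) for p in ("left.jpg", "center.jpg", "right.jpg")]
if any(f is None for f in frames):
    raise FileNotFoundError("put three overlapping frames next to this script")

stitcher = cv2.Stitcher_create()          # default panorama mode
status, panorama = stitcher.stitch(frames)

if status == 0:                            # 0 == Stitcher::OK
    cv2.imwrite("carlog_panorama.jpg", panorama)   # wide-angle composite
else:
    print(f"stitching failed, status code {status}")
```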
We were very excited when we got this solution, at least from the perspective of camera perception and recording, but we spent a great deal of development and polishing, and a great deal of effort, to integrate it into the car and keep it working stably under high speed, vibration, high temperature, and high humidity. I think this challenging project also greatly exercised the team's ability and fighting spirit.
Frankly, we also tore up many schemes along the way, which is what made Carlog so impressive today. I was deeply involved, and our core Carlog team in particular is a group of very passionate young people. They put in a great deal of effort and endured a lot during the iteration of the solution, because some schemes had to be reset along the way and we had to find better, newer ones. Carlog was developed from the camera's underlying hardware and the vehicle's hardware layer all the way up through the software, AI editing, and video stitching. This project set a very good example for the whole project team and even for the whole of Zhiji.
Finally, one of Carlog's biggest highlights is that it fills the "recording blank" while you are driving and gives the car the ability to record, elegantly, the beautiful life and scenery along the way. I still think Carlog's capability and its social value are underestimated. In the era of 5G intelligence, from the perspective of real-time video, recording life, and sharing the journey, I believe Carlog will unleash even greater value in the future, and I hope our media friends will help us promote Carlog's development.
Q: Zhiji attaches great importance to users' rights and interests, including the earlier plan to launch user data rights in Yuanshigu (Raw Stone Valley). How do you view the concept of user co-creation that is being talked up so much right now?
A: The broad trends of historical development cannot be overemphasized. We began preparing the Yuanshigu user data rights program nearly three years ago. We find that the era of artificial intelligence is fundamentally different from previous industrial revolutions and social waves: unlike earlier inventions of energy sources, products, or new tools that raised productivity, such as the internal combustion engine and electric power, artificial intelligence is the invention of a kind of intelligence.
At present this kind of intelligence can, in theory, work together with human intelligence. In the future AI may surpass humans in many respects and may even develop a degree of autonomous consciousness; of course, that is a philosophical question. Human intelligence and artificial intelligence each have their strengths, and together they produce a higher-dimensional intelligence. In theory, every product can be reinvented.
The future progress of brands and products depends on users and on the data generated as they use the products. Based on this reading of the logic of the era, we planned the CSOP Yuanshigu user data rights program very early. Through 300 million "raw stones," we give back for users' data contributions in the form of data rights tied to the growth of Zhiji's founding value, creating a dividend of the era's development to be shared with users.
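Purely as an arithmetic illustration of "giving back in proportion to data contribution," the snippet below splits a fixed pool of raw stones according to per-user contribution scores. The scoring and the allocation rule are assumptions for illustration, not Zhiji's published CSOP formula.

```python
# Illustrative arithmetic only: splitting a fixed pool of "raw stones" in
# proportion to each user's data contribution score. Both the scores and the
# proportional rule are assumptions, not the actual CSOP mechanism.
def allocate_raw_stones(pool: int, contributions: dict[str, float]) -> dict[str, int]:
    total = sum(contributions.values())
    if total == 0:
        return {user: 0 for user in contributions}
    return {user: int(pool * share / total) for user, share in contributions.items()}

# Toy example: three users share a scaled-down pool of 3,000 stones.
print(allocate_raw_stones(3_000, {"user_a": 120.0, "user_b": 80.0, "user_c": 200.0}))
# -> {'user_a': 900, 'user_b': 600, 'user_c': 1500}
```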
I believe that in the era of artificial intelligence, for any intelligent or quasi-intelligent product, users' data rights must be taken seriously; it is an inevitable and unavoidable topic.
Where reality and the virtual world churn together, Zhiji owners will gather in Yuanshigu to open raw stones, accumulate points, upgrade equipment, evolve their vehicles, and more. Colleagues on the user operations team have created and planned many interesting and in-depth ways to play, and I believe owners will interact and grow happily in the world of Yuanshigu. Together with users, we will build a diverse, co-created community of value and a spiritual home.
(Image: Zhiji Auto Yuanshigu)