Communication between Android components


First, let's sort out the ways in which different components communicate in Android.

(Note: apart from file storage and ContentProvider, the approaches below generally refer to communication within the same process. Cross-process communication additionally needs Messenger or AIDL, which will be introduced in detail later and is not discussed here.)

Mode 1: use an Intent to pass values (between Activity and Activity)

Sending example:

Intent intent = new Intent();
intent.putExtra("extra", "Activity1");
intent.setClass(Activity1.this, Activity2.class);
startActivity(intent);

Receiving example:

Intent intent = getIntent();
String data = intent.getStringExtra("extra");
TextView tv_data = (TextView) findViewById(R.id.tv_data);
tv_data.setText(data);

Mode 2: use a Binder to pass values (between Activity and Service)

1. Define the Service

In the Service, define an inner class that extends Binder and use it to hand the Service object to the Activity that needs it, so that the Activity can call the Service's public methods and fields, as follows:

public class MyService extends Service { // instantiate the Binder subclass you define
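A minimal sketch of this pattern (the class and method names below are illustrative, not the original code):

import android.app.Service;
import android.content.Intent;
import android.os.Binder;
import android.os.IBinder;

public class MyService extends Service {

    // the Binder handed to bound clients; it exposes this Service instance
    public class MyBinder extends Binder {
        public MyService getService() {
            return MyService.this;
        }
    }

    private final IBinder binder = new MyBinder();

    @Override
    public IBinder onBind(Intent intent) {
        return binder;
    }

    // a public method the bound Activity can call directly
    public String getGreeting() {
        return "hello from MyService";
    }
}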

2. Bind the Service from the Activity

The Activity obtains the MyService object through the Binder's getService() method (from the IBinder delivered in onServiceConnected) and can then call its public methods. The code is as follows:

public class MyBindingActivity extends Activity {
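A matching sketch of the binding Activity (it assumes the illustrative MyBinder/getService() helper shown above):

import android.app.Activity;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;

public class MyBindingActivity extends Activity {

    private MyService myService;
    private boolean bound;

    private final ServiceConnection connection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            // getService() hands back the Service instance, so its public methods become callable
            myService = ((MyService.MyBinder) service).getService();
            bound = true;
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            bound = false;
        }
    };

    @Override
    protected void onStart() {
        super.onStart();
        bindService(new Intent(this, MyService.class), connection, Context.BIND_AUTO_CREATE);
    }

    @Override
    protected void onStop() {
        super.onStop();
        if (bound) {
            unbindService(connection);
            bound = false;
        }
    }
}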

Mode 3: use a Broadcast to pass values

In essence, it uses Broadcast sending and receiving to implement the communication.

Example of sending a Broadcast:

static final String ACTION_BROAD_TEST = "com.my.broad.test";

// send
Intent mIntent = new Intent(ACTION_BROAD_TEST);
sendBroadcast(mIntent);

Example of receiving the Broadcast:

// dynamically register the broadcast receiver
public void registerMessageReceiver() {
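Expanding that fragment into a working sketch (the action string reuses ACTION_BROAD_TEST from the sending example; the class name and extra key are illustrative):

import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;

public class ReceiverActivity extends Activity {

    static final String ACTION_BROAD_TEST = "com.my.broad.test";

    private final BroadcastReceiver messageReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // read whatever data the sender attached to the Intent
            String extra = intent.getStringExtra("extra");
        }
    };

    // dynamically register the broadcast receiver
    public void registerMessageReceiver() {
        registerReceiver(messageReceiver, new IntentFilter(ACTION_BROAD_TEST));
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        registerMessageReceiver();
    }

    @Override
    protected void onDestroy() {
        // always unregister to avoid leaking the receiver
        unregisterReceiver(messageReceiver);
        super.onDestroy();
    }
}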

Mode 4: use Application, SharedPreferences, file storage, a database, ContentProvider, and so on

The idea is to keep some data in the longer-lived Application object so that different Activities and other components can read and write it. This is not safe, however, because the Application (process) may be recycled. SharedPreferences, file storage, and databases all come down to storing data in the corresponding files, so they are not discussed further here.

Mode 5: use interfaces (callbacks)

Define an interface, have the components that care about an event implement it, and have the place where the event is triggered register/unregister the interested listeners. This is the observer pattern, and its problem is obvious: it tends to couple the components more tightly, and the growing number of interfaces becomes troublesome. For space reasons it is not expanded on here.

To sum up, each of these communication methods has its problems. Mode 5 couples components badly, especially as the number of interfaces grows; broadcasts are not well suited when an Activity and a Fragment need to interact; and so on. So we need something simpler, EventBus, to achieve low-coupling communication between components.

Mode 6: EventBus

EventBus class library introduction

EventBus is a publish/subscribe event bus library optimized for Android.

 

It decouples event senders and receivers; works well across Activities, Fragments, and background threads; avoids complex and error-prone dependencies and lifecycle issues; makes code more concise; is fast; and is small (< 50 KB jar). It is already used in apps with more than 100,000,000 installs, and it offers advanced features such as delivery threads and subscriber priorities.

Using EventBus takes three steps:

Define events: public class MessageEvent { /* Additional fields if needed */ }

Prepare subscribers: eventBus.register(this);

public void onEvent(AnyEventType event) { /* Do something */ }

Post events:

eventBus.post(event);
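Putting the three steps together in an Activity might look like the sketch below. It follows the EventBus 2.x onEvent naming used above; registering in onStart and unregistering in onStop is a common convention, not something stated in the original.

import android.app.Activity;
import de.greenrobot.event.EventBus;

public class MessageActivity extends Activity {

    @Override
    protected void onStart() {
        super.onStart();
        // prepare the subscriber
        EventBus.getDefault().register(this);
    }

    @Override
    protected void onStop() {
        // always unregister to avoid leaks
        EventBus.getDefault().unregister(this);
        super.onStop();
    }

    // called when any component posts a MessageEvent
    public void onEvent(MessageEvent event) {
        // do something, e.g. update the UI
    }

    private void send() {
        // post the event from anywhere; no direct reference to the subscriber is needed
        EventBus.getDefault().post(new MessageEvent());
    }
}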

The following is an example of EventBus usage found on the Internet: http://blog.csdn.net/jdsjlzx/article/details/40856535


Problems with EventBus

Of course, EventBus is not a panacea, and some problems do appear in practice. Precisely because it is so convenient, it is sometimes misused and ends up making the code logic more chaotic; for example, some places end up sending messages in a loop. In a later chapter we will look carefully at whether there is a better alternative to EventBus, such as RxJava.

Android Service keep-alive: attack and defense


The company launched a project in June 2015, an application similar to travel software that needs to upload latitude and longitude in real time, which involves keeping a background Service alive. Because of this business scenario, the keep-alive problem is something many developers actually run into, and many people ask about it online, for example: how do you make an Android app keep running in the background, like QQ or WeChat, without being killed? There are many answers, but only a few approaches achieve the desired effect. Below I discuss the various schemes based on my own development experience.

One: why keep the service alive?

The need for keep-alive comes from wanting our service or process to keep running in the background, while all kinds of things conspire to kill it. The main causes are: 1. Android system memory reclaim; 2. phone manufacturers' custom management features, such as power management and memory management; 3. third-party software (cleaners); 4. the user manually ending the process.

Two: the means of keeping alive

1. Modify the return value of the onStartCommand method of Service

Can the service restart itself after it is killed? The usual trick is to change the return value to START_STICKY. onStartCommand() returns an integer that tells the system whether to recreate the service after killing it. There are three return values:

START_STICKY: if the service process is killed, the service stays in the started state but the delivered Intent is not retained; the system will later try to recreate the service.

START_NOT_STICKY: with this return value, if the service is killed abnormally after onStartCommand has executed, the system will not automatically recreate the service.

START_REDELIVER_INTENT: with this return value, if the service is killed abnormally after onStartCommand has executed, the system will recreate the service after a while and redeliver the last Intent to it.

[Feasibility] Judging from these three return values, it looks like there is hope: can we reach our goal by returning START_STICKY or START_REDELIVER_INTENT? In practice, testing shows that apart from the low-memory kill case, only a few scenarios and phone models actually restart the service.
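For reference, a minimal sketch of changing the return value (the service name is illustrative):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class KeepAliveService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // ask the system to recreate the service after it has been killed
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}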

2. Restart from the Service's onDestroy method

Send a broadcast in onDestroy; when the broadcast is received, restart the Service.

@Override
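The surviving @Override fragment above belongs to the onDestroy override. A fuller sketch of the idea (the broadcast action string is illustrative; LocationService is the service being kept alive, as in the manifest snippet later in this article):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// the matching receiver; it has to be registered (e.g. in the manifest)
public class RestartReceiver extends BroadcastReceiver {

    static final String ACTION_RESTART = "com.dwd.action.RESTART_SERVICE"; // illustrative action

    @Override
    public void onReceive(Context context, Intent intent) {
        // bring the service back up
        context.startService(new Intent(context, LocationService.class));
    }
}

// and inside the Service being protected:
//
//     @Override
//     public void onDestroy() {
//         super.onDestroy();
//         sendBroadcast(new Intent(RestartReceiver.ACTION_RESTART));
//     }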

[Feasibility] Under the four causes listed above, when the Service is killed the whole app process is usually killed with it, so onDestroy never even executes and there is no chance to restart the service this way.

3. Raise the Service priority

Raise the priority in the Service declaration:

<service android:name="com.dwd.service.LocationService" android:exported="false" >

[Feasibility] This method does not work for a Service: the service element has no such priority attribute.

4. Foreground service

A foreground service is considered a running service that the user is actively aware of, so the system does not kill its process when it needs to reclaim memory. A foreground service must show a notification in the status bar.

NotificationCompat.Builder nb = new NotificationCompat.Builder(this);
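A minimal sketch of promoting a service to the foreground (the notification text is illustrative; it uses the pre-Oreo support-library NotificationCompat API that the snippet above suggests):

import android.app.Notification;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.support.v4.app.NotificationCompat;

public class LocationService extends Service {

    @Override
    public void onCreate() {
        super.onCreate();
        Notification notification = new NotificationCompat.Builder(this)
                .setContentTitle("Location service")               // illustrative text
                .setContentText("uploading location in the background")
                .setSmallIcon(R.drawable.ic_launcher)              // the app's own icon resource
                .setOngoing(true)
                .build();
        // a non-zero id plus the notification keeps the service in the foreground
        startForeground(1, notification);
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}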

[Feasibility] This helps against normal system reclaim and lowers the probability of being killed, but under extremely low memory the process is still killed and is not restarted. If a cleaning tool or the user force-stops the app, the process dies and is not restarted either.

5. Guardian processes

There are two ways to implement this. One is dual services or dual processes that start and watch each other: when one is killed, the other brings the service back up. The other is to fork a child process from the native layer to guard the main process.

[Feasibility] In the first way, the two processes or two services are killed together with the application process, so neither survives to do the restarting. The second way really can wake the process up, but Android 5.0 and above puts the forked child into the same process group as the app: when the main process dies the whole group is killed, so the fork approach no longer works on Android 5.0+.

6. Listening for system broadcasts

Listen to certain system broadcasts, such as boot completed, screen unlock, network connectivity changes, application state changes, and so on; then check whether the Service is alive and start it if it is not.

[Feasibility] Since Android 3.1, to strengthen security and optimize performance, the system restricts these broadcasts: an application that has never been launched after installation, or that the user has force-stopped, cannot receive regular system broadcasts such as boot completed, screen unlock, or connectivity changes. On top of that, the latest Android N removed the implicit network-change broadcast. Sadly, this approach is close to unusable now.

7. Mutual wake-up between applications

Different apps can wake each other up with broadcasts. Take the Ali family (Alipay, Taobao, Tmall, and other Ali apps): open any one of them and the other Ali apps are woken up; in fact the BAT companies almost all do this. In addition, many push SDKs will also wake up the app.

[Feasibility] Mutual wake-up requires the apps to be related to each other, and neither this nor push-SDK wake-up works once the user has force-stopped the application.

8. The one-pixel Activity

After the app goes to the background, leave a page of only 1 pixel on the screen so that the app stays "in the foreground" and protects itself from background cleaning tools. This is the trick Xiaomi exposed Tencent QQ using.

[Feasibility] It will still be killed.

9. Install the APK to /system/app, turning it into a system-level application

[Feasibility] This only suits pre-installed applications; an ordinary app cannot turn itself into a system-level application.

10. Use the account and sync mechanism provided by the Android system

Create an account in the application, enable automatic synchronization, set the sync interval, and let sync wake up the app. Once created, the account can be seen under Settings – Accounts. The user may delete the account or turn off sync, so you need to check periodically whether the account still exists and can sync.

// create the account
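A sketch of creating the account and enabling periodic sync (the account type, authority, and interval are illustrative; a matching authenticator and sync adapter still have to be declared for this to work):

import android.accounts.Account;
import android.accounts.AccountManager;
import android.content.ContentResolver;
import android.content.Context;
import android.os.Bundle;

public class SyncAccountHelper {

    static final String ACCOUNT_TYPE = "com.dwd.account";   // illustrative account type
    static final String ACCOUNT_NAME = "keep-alive";
    static final String AUTHORITY    = "com.dwd.provider";  // illustrative sync authority

    public static void addAccountAndEnableSync(Context context) {
        // establish the account
        Account account = new Account(ACCOUNT_NAME, ACCOUNT_TYPE);
        AccountManager am = (AccountManager) context.getSystemService(Context.ACCOUNT_SERVICE);
        am.addAccountExplicitly(account, null, null);

        // let the system sync this account automatically and periodically
        ContentResolver.setIsSyncable(account, AUTHORITY, 1);
        ContentResolver.setSyncAutomatically(account, AUTHORITY, true);
        ContentResolver.addPeriodicSync(account, AUTHORITY, Bundle.EMPTY, 900L); // seconds
    }
}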

[Feasibility] Except on Meizu phones, this scheme can wake the app up no matter how it was killed. On Xiaomi phones, MIUI's "Shenyin" background-restriction mode has to be turned off. The scheme has been around for about a year and many developers are now using it.

11. Whitelisting

Get the application added to the whitelist of the phone vendor or of security software so the process is not reclaimed by the system. WeChat and QQ, for example, are on Xiaomi's whitelist, so WeChat is not killed by the system, although the user can still stop it.

[Feasibility] The success rate is good, but users can still kill the application manually. Also, unless your user base is large, it is hard for an app developer to negotiate with the manufacturers: domestic Android phone vendors are too numerous and the cost is too high. But once your installs and active users reach WeChat's level, the vendors may add your application to the whitelist on their own initiative.

I combined schemes 4, 6, 7, and 10 and kept the service alive successfully on about 90% of phones. Keeping a Service alive is really a war of attack and defense: the app wants to keep running in the background for business reasons, while the system kills background services for performance and battery reasons. It is also a protracted war; today's workable scheme may be blocked one day. There is no end to learning, and exploration never stops.

How we use Presto


Reasons for use:

Our BI colleagues and big data developers need to query all kinds of data with Hive every day, and more and more reporting business runs on Hive. Although the CDH cluster has Impala deployed, most of our Hive tables use the ORC format, and Impala's ORC support is unfriendly. Before Presto, colleagues had to query the relevant tables through Hive, i.e. through MapReduce, and query efficiency was low.

Presto introduction:

Presto is an open-source distributed SQL query engine, suited to interactive analytic queries and able to handle massive data. It was built mainly to solve the slowness of interactive analysis on commercial data warehouses. It supports standard ANSI SQL, including complex queries, aggregation, joins, and window functions.

Working principle:

Presto's execution model is fundamentally different from that of Hive or MapReduce. Hive translates a query into multiple stages of MapReduce tasks that run one after another, each stage reading its input from disk and writing its intermediate result back to disk. Presto does not use MapReduce: it has a custom query execution engine with operators designed to support SQL semantics. Besides an improved scheduling algorithm, all data processing happens in memory, and the processing stages form a pipeline connected over the network. This avoids unnecessary disk reads and writes and the extra latency they bring. The pipelined execution model runs multiple processing stages at the same time, and data is streamed from one stage to the next as soon as it becomes available, which greatly reduces the end-to-end latency of all kinds of queries.

Usage scenarios:

1. Everyday Hive queries: more and more colleagues now query Hive tables through Presto; compared with Hive on MapReduce, query efficiency is greatly improved.

Using Sidecar to introduce Node.js into Spring Cloud


Theory

Brief introduction

Spring Cloud is currently a popular microservice solution. It combines the development convenience of Spring Boot with the rich, battle-tested components of Netflix OSS. As we all know, unlike Dubbo, Spring Cloud builds the whole service system on Rest services over HTTP(S).

Can we develop some of those Rest services in non-JVM languages, for example the Node.js we are familiar with? Of course we can. However, providing a Rest service alone does not plug it into the Spring Cloud system: we also want to use Spring Cloud's Eureka for service discovery, Config Server for configuration management, and Ribbon for client-side load balancing. This is where Spring Cloud's Sidecar shows its talents.

Sidecar originated from Netflix Prana. It provides an HTTP API through which a non-JVM application can query all instances of a registered service (hosts, ports, and so on). It also routes, through an embedded Zuul proxy, to the relevant nodes obtained from Eureka. Spring Cloud Config Server can be reached either directly or through the Zuul proxy.


What you need to be aware of: the Node.js application you develop has to implement a health-check interface so that the Sidecar can report the health of this service instance to Eureka.

To use Sidecar, create a Spring Boot application annotated with @EnableSidecar. Let's look at what this annotation does; it is roughly declared as:

@EnableCircuitBreaker
@EnableDiscoveryClient
@EnableZuulProxy
public @interface EnableSidecar {

See: the hystrix circuit breaker, Eureka service discovery, and the Zuul proxy are all switched on by this single annotation.

Health check

Next, add the sidecar.port and sidecar.health-uri settings to application.yml. The sidecar.port property is the port the Node.js application listens on; it is what lets the sidecar register the service in Eureka. sidecar.health-uri is a URI on the Node.js application that mimics a Spring Boot health endpoint. It must return a JSON document of the following form:


{ "status": "UP" }

The application.yml of the entire Sidecar application is as follows:


server:
  port: 5678

Service access

After building this application, you can call the /hosts/{serviceId} API to get the result of DiscoveryClient.getInstances(). For example, /hosts/customers might return two instances running on different hosts. If the sidecar runs on port 5678, the Node.js application can reach this API at http://localhost:5678/hosts/{serviceId}.


[

The Zuul proxy automatically adds a route at /<serviceId> for every service registered in Eureka, so the customers service can be reached through the /customers URI. Again assuming the sidecar listens on port 5678, our Node.js application can call the customers service at http://localhost:5678/customers.


Config Server

If we run a Config Server and register it in Eureka, the Node.js application can access it through the Zuul proxy. If the Config Server's serviceId is configserver and the sidecar listens on port 5678, it can be reached at http://localhost:5678/configserver.


A Node.js application can also use the Config Server's ability to serve configuration documents, for example in YAML format. A request to http://sidecar.local.spring.io:5678/configserver/default-master.yml might return a YAML document like the following:


eureka:

So the overall architecture of a Node.js application joining the Spring Cloud microservice cluster through Sidecar looks roughly like this:

 

Demo practice

Let's suppose there is a very simple data structure called User:

class User {

Looks very classic, ha!

Another data structure represents books, Book:

class Book {

The authorId in Book corresponds to a User's id. Now we need to develop Rest services for these two kinds of data.
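For concreteness, the two classes might look roughly like this; only id and authorId are implied by the text, the other fields are made up for illustration:

class User {
    Long id;
    String name;
}

class Book {
    Long id;
    String title;
    Long authorId;   // corresponds to User.id
}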

First, User. We develop it with Spring: mock some fake users in the controller's constructor, then expose a very simple GET interface that looks a user up by id.

@GetMapping("/{id}")
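Filling in around that fragment, the controller might look like this (the class name, mock data, and number of users are illustrative; it reuses the User sketch above):

import java.util.HashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    private final Map<Long, User> users = new HashMap<>();

    public UserController() {
        // mock some fake users in the constructor
        for (long i = 1; i <= 100; i++) {
            User user = new User();
            user.id = i;
            user.name = "user-" + i;
            users.put(i, user);
        }
    }

    // a very simple GET interface that looks a user up by id
    @GetMapping("/{id}")
    public User findById(@PathVariable Long id) {
        return users.get(id);
    }
}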

After starting it, we test with curl:

curl localhost:8720/12

Next, we use Node.js to develop Book related interfaces.

Because the Node.js community is very active, there are plenty of Rest frameworks to choose from: the mainstream express, koa, and hapi, plus very light and extensible ones like connect. Considering user base and documentation richness, I chose express to build the Rest service that will join Spring Cloud.


const express = require('express')

Again, first use faker to mock 100 records of data, then write a simple GET route.


After startup, open http://localhost:3000/book/1 in the browser.


 

Now that we have two microservices, we next launch a Sidecar instance to connect the Node.js one to Spring Cloud.

@SpringBootApplication
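The rest of the main class is essentially boilerplate (the class name is illustrative; the import path assumes the spring-cloud-netflix-sidecar starter):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.sidecar.EnableSidecar;

@SpringBootApplication
@EnableSidecar
public class NodeSidecarApplication {

    public static void main(String[] args) {
        SpringApplication.run(NodeSidecarApplication.class, args);
    }
}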

Very simple. Note that before this you need a eureka-server, and to test the sidecar's ability to proxy access to Spring Cloud Config I also use a config-server; readers familiar with Spring Cloud will know these already.

In the sidecar's configuration, bootstrap.yaml simply specifies the service port and the config-server address, and node-sidecar.yaml looks like this:


eureka:

The address of the Node.js service that the sidecar fronts is specified here. hystrix.command.default.execution.timeout.enabled: false is there because the sidecar uses hystrix's default one-second timeout; given the speed of domestic access to GitHub, my requests to config-server during testing often timed out, so I simply disabled the timeout. You could also just lengthen it instead.


Once eureka-server, config-server, user-service, node-sidecar, and the node book service are all started, open the Eureka home page:

http://localhost:8700/


 

All our services are in the UP state, which means everything is normal. Next, look at the console of the Node.js application:


 

Traffic has come in, and the path being hit is /health: clearly this is node-sidecar calling our node application's health check.


Next, time to witness the miracle. We curl port 8741 of the sidecar:

curl localhost:8741/user-service/12

The result is identical to calling user-service directly, which shows that the sidecar's Zuul proxy can forward our request to the user-service service.

With this proxy in place, we now want the book service to provide an author-information interface:

const SIDECAR = {

Visiting http://localhost:3000/book/2/author, we can see the author information for the book with id 2. But there is a problem: accessing http://localhost:8741/node-sidecar/book/1 only proxies into our Node.js interface, the same way we proxied user-service; it does not answer how the Node.js application itself gets data from user-service. Looking back at the theory section, we can call /hosts/<serviceId> to get information about each service, so let's try http://localhost:8741/hosts/node-sidecar and we get the following result:


 

The response includes, among other things, the URI of the Node.js application. So one option is to first call the sidecar's /hosts interface, obtain the real URI, and then call book-service's /books?uid=<uid> interface directly. But in fact Spring Cloud already has a tool that does this for us: Feign. Create BookFeighClient.java:


@FeignClient(name = "node-sidecar")
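Completing that fragment, the Feign client might be declared like this (the return type, a Java-side Book DTO, and the parameter name are assumptions; the imports assume the spring-cloud-netflix Feign starter of that era; the interface name follows the file name in the text):

import java.util.List;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

@FeignClient(name = "node-sidecar")
public interface BookFeighClient {

    // calls GET /books?uid=<uid> on the Node.js book service behind the sidecar
    @GetMapping("/books")
    List<Book> findByUid(@RequestParam("uid") Long uid);
}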

A FeignClient automatically resolves the service address from Eureka by serviceId; if the service has several instances, Ribbon does client-side load balancing. A set of RequestMapping-style annotations keeps the client consistent with the server-side controller. By declaring the findByUid method we can conveniently call the /books?uid=<uid> interface defined in the Node.js service above. This also matches the sidecar architecture diagram we drew earlier.


Now we define a new type Author in user-service, which extends User and adds a books field:

class Author extends User {

Add an interface that returns an Author:

@GetMapping("/author/{id}")

The logic is simple: get the corresponding user, fetch the user's books from bookFeignClient by uid, then build the Author and return it.
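A sketch of that handler, following the description above (it assumes the users map and an injected bookFeignClient field in the controller, and that Author exposes id, name, and books fields):

@GetMapping("/author/{id}")
public Author findAuthor(@PathVariable Long id) {
    // get the corresponding user
    User user = users.get(id);

    // fetch the user's books through the Feign client (uid == user id)
    List<Book> books = bookFeignClient.findByUid(id);

    // build the Author and return it
    Author author = new Author();
    author.id = user.id;
    author.name = user.name;
    author.books = books;
    return author;
}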

We visit http://localhost:8720/author/11 and get the result:


 

So far, with the help of Sidecar and the plain HTTP protocol, we have made Java and Node.js services call each other. For more, such as fetching configuration from config-server or reading application information from Eureka, you can download the source code of my experiment and dig in.

I put the whole demo on my GitHub; you can clone it directly:

git clone https://github.com/marshalYuan/spring-cloud-example.git

The whole project is roughly structured like this:

• eureka-server          // the Eureka Server in the figure above
• config-server          // the Config Server in the figure above
• config-repo            // the searchPath (configuration repository) of the config-server
• user-service           // services developed in Java, acting as both provider and consumer
• node-sidecar           // the sidecar instance that connects node and spring-cloud
• book-service-by-node   // Rest services developed with express.js

You can start these five applications in the following order:

eureka-server -> config-server -> user-service -> book-service-by-node -> node-sidecar

Since the demo is only meant for testing, I have not polished it beyond that.

Written at the end

As the introduction said, thanks to the universal HTTP protocol and the rich Netflix suite, we can plug many non-JVM languages such as Node.js, PHP, and Python into the very mature Spring Cloud microservice framework and quickly build our microservice business system. You might ask: why not just use Java for everything? Indeed, developing and maintaining a single language in a single system costs much less, but there are situations that make the sidecar solution worth choosing.

For example, when the historical burden is too heavy to cut everything over to the Java platform, and you do not want to rewrite the existing services, you can integrate them at the small cost of agreeing on a unified protocol between Java and the other platforms.

There is also the idea of "embracing each language's dividend". Choosing a development language means choosing the tools and libraries that come with it. For example, Python is popular for data analysis right now, so that part of the microservice system can be developed in Python; Node.js has an excellent asynchronous, event-driven model, so it can be used for services that handle a large number of asynchronous requests; and so on. This is not meant to start another "best language holy war": comparing languages without considering use cases and ecosystem is just trolling. Take me as an example: I still don't think any language reads as simply as Haskell code.

Pythagorean triples:

[(x, y, z) | x <- [1..100], y <- [x..100], z <- [y..100], x^2 + y^2 == z^2]

Besides, our topic here is Node.js, and as everyone knows the best language is PHP. Run away~~~

Android application performance optimization – startup acceleration


I have been studying Android performance optimization recently. The first problem to solve was the stutter when opening a web page inside the app. I tried introducing a third-party WebView component, but that brought another problem: initializing the third-party WebView in the Application made the App's startup time noticeably longer. Today let's talk about how to optimize and accelerate startup from two angles: the Application and the Activity.

One: speeding up the Application. An App's startup time is the time from when the user taps the app icon until the first screen is presented to the user. Shortening this time and showing the first screen quickly greatly improves the user experience. There are two main ways to optimize the Application: reduce the execution time of Application's onCreate method, and use a theme and a Drawable to make the first screen appear faster.

1. Reduce the execution time of the onCreate method

An application freshly created with Android Studio starts very fast, but as the app grows more complex and integrates more and more third-party components, their initialization piles up in onCreate. You then clearly notice that startup stutters and that the white or black screen shown before the first screen appears lasts longer: onCreate is simply taking too long. To solve this, an IntentService can handle the time-consuming initialization.

The code of the IntentService is as follows:

public static void start(Context context) {
    Intent intent = new Intent(context, DwdInitService.class);
    context.startService(intent);
}

@Override
protected void onHandleIntent(Intent intent) {
    if (intent != null) {
        final String action = intent.getAction();
        // handle the time-consuming initialization according to the action
    }
}
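Putting the two fragments together, the whole service might look like this (the action string and the exact initialization work are illustrative):

import android.app.IntentService;
import android.content.Context;
import android.content.Intent;

public class DwdInitService extends IntentService {

    private static final String ACTION_INIT = "com.dwd.action.INIT"; // illustrative action

    public DwdInitService() {
        super("DwdInitService");
    }

    public static void start(Context context) {
        Intent intent = new Intent(context, DwdInitService.class);
        intent.setAction(ACTION_INIT);
        context.startService(intent);
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (intent != null) {
            final String action = intent.getAction();
            if (ACTION_INIT.equals(action)) {
                // do the time-consuming third-party initialization here,
                // e.g. the X5 WebView SDK, off the main thread
            }
        }
    }
}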

In the Application, just call:

DwdInitService.start(this);

I moved the X5WebView initialization there, and the effect is quite obvious.

2. Optimize the presentation of the first screen

As mentioned above, when an App starts there is always a white or black screen first, which is a particularly bad user experience. How do we eliminate it? We can use a custom theme and Drawable. Here is a simple demo as a case; the effect is shown below.

     

<ImageView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="center"
    android:layout_marginBottom="24dp"
    android:src="@drawable/ic_launcher" />

</FrameLayout>

The code is simple, but every time you start, you will notice a white screen before the page is displayed. Now transform the code as follows.

A. Define a Drawable, loading.xml

Set the background and the logo image here.

B. Define a theme in styles.xml whose windowBackground is set to loading.xml:

<style name="Theme.Default.NoActionBar" parent="@style/AppTheme">
    <item name="android:windowBackground">@drawable/loading</item>
</style>

C. Apply the defined theme to LoadingActivity.

Done. Start the App now: the white screen is gone and the user experience is improved.

Two: optimizing and accelerating the Activity

Once inside the App, the speed of jumping between pages is also an important part of the user experience. For example, when opening an embedded web page, there is a noticeable stutter after tapping the trigger button before the new page appears.

Optimizing an Activity likewise means reducing the execution time of its onCreate method, which usually consists of two parts: the layout inflation triggered by setContentView(), and the initialization and data filling done in onCreate.

The second point is easy to understand: keep time-consuming data reading and computation out of onCreate as much as possible, and use asynchronous callbacks to reduce the time spent on the UI thread.

Now for setContentView. Every control in the layout has to be initialized, measured, laid out, and drawn, which is mostly time-consuming work that slows down the first display. When onCreate contains no time-consuming data operations, profiling with TraceView shows that setContentView() takes up almost 99% of the time from the start of onCreate() to the end of onResume().

To reduce the time spent in setContentView:

1. Reduce layout nesting levels

A. Use RelativeLayout

Reduce the use of LinearLayout and prefer RelativeLayout to cut down nesting levels. Nested LinearLayouts that use layout_weight are especially costly, because each child is measured twice. RelativeLayout is more tedious to write, but it reduces the nesting level and therefore the drawing time.

B. Use the <merge> tag

After the merge tag is applied, the layout hierarchy is reduced accordingly.

C. Use a control's own attributes instead of extra nesting. Take the common horizontally arranged menu row as an example, shown below:


     

<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="62dip">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="..." />

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginLeft="15dp"
        android:textSize="18sp" />

</LinearLayout>

Using the TextView's own drawableRight attribute instead, the code is as follows:

<TextView
    android:id="@+id/my_order"
    android:layout_width="match_parent"
    android:layout_height="62dip"
    android:drawableRight="..."
    android:drawablePadding="15dip"
    android:gravity="center_vertical"
    android:paddingLeft="28dip" />

The amount of code and the nesting level are both reduced, and the effect is just as good.

2. Use ViewStub to defer inflation

ViewStub is a lightweight, invisible view. Placed inside your layout, it lets you postpone inflating part of the layout until it is actually needed; a common approach is to set a flag and inflate the deferred layout in onResume.

Characteristics: (1) A ViewStub can only be inflated once, after which the ViewStub object is cleared; in other words, once the layout a ViewStub points to has been inflated, it can no longer be controlled through the ViewStub. (2) A ViewStub can only inflate a layout file, not a specific View, although the layout file may of course contain just a single View. Usage scenarios: (1) a layout that will not change after being inflated while the program runs, unless it is restarted (a complex layout); (2) what you want to show or hide is a whole layout file rather than a single View. In our case, optimizing with ViewStub cut the inflation time by one half to two thirds.

Inflate the layout in code, for example in the onCreate() method:

ViewStub viewStub = (ViewStub) findViewById(R.id.viewstub_demo_image);
viewStub.inflate();

    Communication between Android components

    Communication between Android components

    First, let’s sort out the ways in which we communicate between different components in Andrew.

    (Tips: below, in addition to file storage and ContentProvider, generally refers to communication within the same process. If you want to achieve cross process communication, it also needs the help of Messenger or AIDL technology, followed by a detailed introduction of time, temporarily not discussed).

    Mode 1: use Intent to pass the value: (between Activity and Activity)

    Value example:

    Intent intent=new Intent (); intent.putExtra (“extra”, “Activity1”); intent.setClass (Activity1.this, Activity2.class); startActivity (intent);

    Value example:

    Intent intent=getIntent (); String data=intent.getStringExtra (“extra”); TextView tv_data= (TextView) findViewById (R.id.tv_data); tv_data.setText (data);

    Mode two: use Binder to transmit values (between Activity and Service).

    1. define Service

    In the Service, define an internal class that inherits from Binder, passing this class, passing the object of the Service to the required Activity, so that the Activity can call the public method and property in the Service, as follows:

    Public class MyService extends Service {/ / instantiate the Binder class that you define.

    2.Activity binding Service

    It is to get the MyService object through the getService of IBinder, and then call the Public method. The code is as follows:

    Public class MyBindingActivity extends Activity {

    Way three: use Broadcast broadcast transmission value

    In fact, it uses Broadcast’s sending and receiving to realize communication.

    Send an instance of Broadcast:

    Static final String ACTION_BROAD_TEST = “com.my.broad.test”; / / / / / / / / send Intent mIntent = new Intent (ACTION_BROAD_TEST);

    Receive the Broadcast instance:

    / / dynamically register broadcast public void registerMessageReceiver () {

    Mode four: use Application, SharePreference, file storage, database, ContentProvider and so on.

    It is to use Application to store some data in a longer life cycle for different activity and other read and write calls, but it is not safe, Application is likely to be recycled, SharePreference and file storage and database are basically stored in the corresponding files, without discussion

    Mode five: use the interface:

    It is to define an interface that needs to be concerned with the place of the event to implement the interface. Then the event triggered places to register / unregister the controls that are interested in the event. It is the observer pattern, and the problem is obvious, which is often more coupled between different components, and more and more interfaces are also troublesome, space reasons, and unspecific expansion.

    To sum up, all kinds of communication methods are more or less problems, such as the way five, the coupling is more serious, especially when the interface is more and more, such as the form of broadcasting, when activity and fragment need to interact, it is not suitable, so we need to use a more simple EventBu. S to solve low – coupling communication between components

    Mode six: EventBus:

    EventBus class library introduction

    EventBus is an optimized Android system class library in publish / bus mode.

     

    Decoupling of Event’s senders and receivers can work well in Activities, Fragments, and background threads to avoid complex and error prone dependencies and lifecycle issues that make your code more concise and faster class library smaller (< 50K jar) has been used in greater than 100000000+ already installed Apps. There are some advanced features such as delivery threads, subscriber priorities and so on.

    EventBus uses three steps

    Define events: public class MessageEvent {/ * Additional fields if needed * /}

    Prepare subscribers: eventBus.register (this);

    Public void onEvent (AnyEventType event)} / * Do something * /};

    Post events:

    EventBus.post (event);

    The following is an example of EventBus use on the Internet: http://blog.csdn.net/jdsjlzx/article/details/40856535

    Http://blog.csdn.net/jdsjlzx/article/details/40856535

    EventBus’s problem?

    Of course, EventBus is not a panacea, and there are some problems in the process of using, for example, because of the convenience of use, it will cause misuse at some time, instead of making code logic more chaotic, for example, some places will circulate to send messages, and so on. In the later chapter, we will carefully study whether there is a better alternative to Eve. The way of ntBus, such as RxJava?

    Android Service active attack and defense

    company launched a point in June 15, similar to travel software applications need to upload real-time latitude and longitude, involved in the backstage Service to survive the problem. Because of the business scene, the problem of keeping alive is actually encountered by many developers, and many people ask questions on the Internet, such as: how to make the Android program run in the background, like QQ, WeChat and not killed? There are many respondents, but there are several ways to achieve the desired effect. Next, I will talk about the various plans combined with my own development experience.

    The company launched a project in June of 15, which is similar to the application of travel software to upload real-time latitude and longitude, which involves the problem of backstage Service security. Because of the business scene, the problem of keeping alive is actually encountered by many developers, and many people ask questions on the Internet, such as: how to make the Android program run in the background, like QQ, WeChat and not killed? There are many respondents, but there are several ways to achieve the desired effect. Next, I will talk about the various plans combined with my own development experience.

    One, why do you want to live?

    The source of the survival is because we hope that our service or process can run in the background, but there are all kinds of reasons that cause our hopes to be disillusioned. The main reasons are as follows: 1, Android system recovery; 2, mobile phone manufacturer’s custom management system, such as power management, memory management, and so on; 3, third square. Software; 4, user manual end.

    Two. The means of keeping alive

    1. Modify the return value of the onStartCommand method of Service

    Can the service be restarted when the service is aborted? The general practice is to modify the return value and return START_ STICKY. OnStartCommand () returns an integer value to describe whether the system will continue to start the service after killing the service. There are three kinds of return values:

    START_STICKY: if the service process is dropped by kill, the state of the reservation of the service is the start state, but the delivery intent object is not retained, and the system will then try to recreate the service.

    START_NOT_STICKY: when using this return value, if the service is dropped by the exception kill after the execution of the onStartCommand, the system will set it to a started state, and the system will not automatically restart the service.

    START_REDELIVER_INTENT: when using this return value, if the service is dropped by the exception kill after the execution of the onStartCommand, it will automatically restart the service after a period of time and pass the value of the Intent into it.

    [feasibility] see the explanation of the three return values, which looks like the hope of keeping alive. Is it possible to achieve our goal by setting the return value to START_ STICKY or START_REDELIVER INTENT? But in fact, after the actual test, only a few cases and models can be restarted except for the lack of memory.

    2, Service onDestory method restarted

    After onDestory sends a broadcast and receives the broadcast, it restarts the Service.

    @Override

    [feasibility] under 4 reasons of appeal, Service was killed, and the APP process was basically dry out, and neither the onDestroy method was executed, so there was no way to start the service.

    3. Improve the Service priority

    Increase priority in Service registration

    < service android:name= “com.dwd.service.LocationService” android:exported= “false” >

    [feasibility] this method is invalid for Service, and service has no such attribute.

    4. Front desk service

    The front desk is considered to be a running service known to the user. When the system needs to release the memory, it will not kill the process, and the front service must have a notification in the status bar.

    NotificationCompat.Builder NB = new NotificationCompat.Builder (this);

    [feasibility] this method has a certain effect on preventing system recovery and can reduce the probability of recovery, but the system will be dropped by kill in the case of very low memory, and it will not be restarted. And the cleaning tool or manual forced end, the process will be hung up and will not be restarted.

    5. Process Guardians

    There are two ways to implement this scheme, one is dual service or two processes, starting two services and listening to each other. One is hung, and the other will start up the service. Another way is to pull a sub process from the native layer to the main process in fork.

    [feasibility] first way, the two process or the two services will hang up with the application process, so it will not start. When the second ways are killed, it can really wake up, but the Android 5 and above will put the fork out of the process in a process group. When the program master process is hung up, the whole process group will be killed, so it can not be awakened in the Android5.0 and above system by the way of fork.

    6, monitoring system broadcasting

    By listening to some broadcasts of the system, such as mobile phone boot, Jie Suoping, network connection status change, application status change, and so on, then determine whether Service is alive, if otherwise Service is started.

    [feasibility] after the 3.1 version of the Android system, in order to strengthen the system security and optimize the performance of the system broadcast restrictions, the application monitoring mobile phone boot, Jie Suoping, network connection status change and other regular system broadcast after the android3.1, the first installation is not started or the user forced to stop, the application can not be monitored. Hear. In addition, the latest Android N canceled the network switch broadcast, it is really sad, no one can use it.

    7. Interoperability between applications

    Use the different app processes to use radio to wake up each other, such as Alipay, Taobao, Tmall, and other Ali, such as the app, if open any of the applications, the other Ali app will wake up, in fact, the BAT system is almost all. In addition, a lot of push SDK will also wake up app.

    [feasibility] multiple app application wake-up calls need to be related to each other, so that the SDK application wakeup can not be waken up when the user is forced to stop.

    8, activity a pixel point

    After the application is back to the backstage, another page with only 1 pixels stays on the desktop to keep the front desk and protect himself from the backstage cleaning tools to kill. This scheme is the practice of millet exposure to Tencent QQ.

    [feasibility] will still be killed.

    9. Install APK to /system/app and transform it to system level application.

    [feasibility] this method is only suitable for pre installation applications, and ordinary applications can not be transformed into system level applications.

    10, using the account and synchronization mechanism provided by Android system.

    Create an account in the application, then open auto synchronization and set the synchronization interval, and use synchronization to wake up app. After the account is established, the account number can be seen in the mobile phone setting – account. The user may delete the account or stop the synchronization, so it is necessary to check whether the account can synchronize regularly.

    / / establish account number

    [feasibility] except for Meizu mobile phones, this program can successfully wake up app, no matter how it is killed. In addition, the millet phone needs to close the Shenyin mode. This plan has been put forward for nearly a year, and now many developers are using it.

    11, white list

    Put the application into the white list of mobile phones or security software to ensure that the process is not reclaimed by the system, such as WeChat and QQ in the white list of millet, so WeChat will not be dried up by the system, but the user can stop.

    [feasibility] the success rate of this scheme is good, but users can still manually kill the application. In addition, if the user base is not big enough, the application developers will go to the big manufacturers to talk about it. The domestic Android mobile phone manufacturers are too numerous and too costly. But when the installed capacity and the active users reach WeChat, maybe the manufacturer will take the initiative to add your application to the white list.

    I applied 4, 6, 7 and 10 of these four programs to ensure that 90% of the mobile phones were successfully protected. Service is actually a war and defense war, the application in order to demand the need to realize the backstage operation, but the system for performance security and other factors to consider the back of the backstage service. Moreover, Bao Huo is also a protracted war. Maybe the current feasible plan was destroyed by Gordon one day. There is no end to learning, and exploration never stops.

    Presto in the point of my use

    use reason:

    Reasons for use:

    Point me to big data developers, BI colleagues need to use hive to query various kinds of data every day, more and more reports business is used to hive. Although the CDH cluster has deployed impala, most of our hive tables use ORC format, and impala is unfriendly to ORC support. Before using presto, I would like to use big data to use hive and go to MapReduce to query related tables. The efficiency of query is low.

    Presto introduction:

    Presto is an open source distributed SQL query engine, which is suitable for interactive analysis and query, and supports massive data. It is mainly to solve the interactive analysis of commercial data warehouse and to deal with low speed. It supports standard ANSI SQL, including complex queries, aggregation (aggregation), connection (join) and window function (window functions).

    Working principle:

    The running model of Presto is essentially different from that of Hive or MapReduce. Hive translates queries into multistage MapReduce tasks and runs one after another. Each task reads the input data from the disk and outputs the intermediate results to the disk. However, the Presto engine does not use MapReduce. It uses a custom query and execution engine and response operators to support SQL syntax. In addition to the improved scheduling algorithm, all data processing is carried out in memory. Different processing terminals constitute the pipeline processed through the network. This will avoid unnecessary disk read and write and additional delay. This pipelined execution model runs multiple data processing segments at the same time, and once the data is available, the data will be passed from one processing segment to the next processing segment. Such a way will greatly reduce the end to end response time of various queries.

    Use the scene:

    1, commonly used hive queries: more and more colleagues have been querying hive through presto. Compared with the MR hive query, the efficiency of Presto has been greatly improved.

    Using Sidecar to introduce Node.js into Spring Cloud

    Using Sidecar to introduce Node.js into Spring Cloud

    theory

    brief introduction

    Spring Cloud is a popular micro service solution at present. It combines the convenient development of Spring Boot with the rich solution of Netflix OSS. As we all know, Spring Cloud is different from Dubbo and uses Rest services based on HTTP (s) to build the whole service system.

    Is it possible to develop some Rest services using some non JVM languages, such as Node.js, which we are familiar with? Yes, of course. However, if only Rest services are available, it is not possible to access the Spring Cloud system. We also want to use the Eureka provided by Spring Cloud for service discovery, use Config Server to do configuration management, and use Ribbon to do client load balancing. At this point, Spring sidecar will be able to show its talents.

    Sidecar originated from Netflix Prana. He provides a HTTP API that allows access to all instances of established services, such as host, ports, etc. You can also use an embedded Zuul proxy service to get the relevant routing nodes from Eureka. Spring Cloud Config Server can be accessed directly through the host or through proxy Zuul.

    Netflix Prana

    What you need to be aware of is the Node.js application you have developed, and you have to implement a health check interface to allow Sidecar to report the health of this service instance to Eureka.

    In order to use Sidecar, you can create a Spring Boot program with @EnableSidecar annotation. Let’s look at what this annotation has done.

    @EnableSidecar

    @EnableCircuitBreaker

    Look, hystrix fuse, Eureka service discovery, zuul agent, all of these components have been opened.

    Health examination

    Next, we need to add the configuration of sidecar.port and sidecar.health-uri in application.yml. The sidecar.port attribute represents the port of the Node.js application listener. This is to enable sidecar to register in Eureka services. sidecar.health-uri is a URI that simulates the interface of Spring Boot application health indicators. It must return the following form of JSON document: health-uri-document

    Sidecar.port

    Sidecar.health-uri

    Sidecar.port

    Sidecar.health-uri

    Health-uri-document

    {

    The application.yml of the entire Sidecar application is as follows: application.yml

    Application.yml

    Application.yml

    Server:

    Service access

    After building this application, you can use the /hosts/{serviceId} API to get the result of DiscoveryClient.getInstances () . Here is an example of returning two instances of information from different /hosts/customers from host. If sidebar runs on the 5678 port, then the Node.js application can access the API via the http://localhost:5678/hosts/{serviceId}.

    /hosts/{serviceId}

    DiscoveryClient.getInstances ()

    /hosts/customers

    Http://localhost:5678/hosts/{serviceId}

    /hosts/customers

    [

    Zuul proxy can automatically be registered to the Eureka association to /< serviceId> services add routing, so the customer service can be accessed via the /customers URI. It is also assumed that sidecar is listening on the 5678 port, so our Node.js application can access the customer service by http://localhost:5678/customers.

    /< serviceId>

    /customers

    Http://localhost:5678/customers

    Config Server

    If we use the Config Server service and register it to Eureka, Node.js application can access it through Zull Proxy. If ConfigServer’s serviceId is configserver and Sidecar listens on the 5678 port, then it can be accessed through the

    Configserver

    Http://localhost:5678/configserver

    Node.js applications can also use the capabilities of Config Server to get some configuration documents, such as YAML format. For example, a access to http://sidecar.local.spring.io:5678/configserver/default-master.yml may get the following return of the YAML document:

    Http://sidecar.local.spring.io:5678/configserver/default-master.yml

    Eureka:

    So the whole architecture of Node.js application accessing to Spring Cloud micro service cluster through Sidecar is roughly shown as follows:

     

    Demo practice

    Let’s suppose that there is such a very simple data. It is called User:

    Class User {

    It looks very classic, Kazakhstan!

    Another data structure is used to represent books, Book:

    Class Book {

    The authorId in Book corresponds to the ID of User. Now we need to develop Rest services for these two data.

    First, User, we use spring to develop, first in the controller construction method, mock some false data users, and then a very simple Get interface based on the ID user.

    @GetMapping (“/{id}”)

    After starting, we curl visited:

    Curl localhost:8720/12

    Next, we use Node.js to develop Book related interfaces.

    Because the Node.js community is very active, the optional Rest service framework is very large. The mainstream is express, koa, hapi, and so on, very light and easy to extend like connect. Here I consider the mass base and document richness, and choose to use to develop such a Rest service that can access Spring Cloud.


const express = require('express')

Again, we first use faker to mock 100 records, then write a simple GET route.
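A minimal sketch of such a Book service, assuming the express and classic faker npm packages; the exact routes and data shape are illustrative, not the article's original code:

const express = require('express');
const faker = require('faker');   // classic faker package assumed

const app = express();

// Mock 100 books; authorId points at a User id on the Spring side.
const books = Array.from({ length: 100 }, (_, i) => ({
  id: i + 1,
  name: faker.commerce.productName(),
  authorId: Math.floor(Math.random() * 100) + 1
}));

// Health endpoint that the Sidecar will poll (must report UP).
app.get('/health', (req, res) => res.json({ status: 'UP' }));

// Simple lookup of a single book by id.
app.get('/book/:id', (req, res) => {
  const book = books.find(b => b.id === Number(req.params.id));
  return book ? res.json(book) : res.status(404).end();
});

// Books written by a given author; consumed later by user-service via Feign.
app.get('/books', (req, res) => {
  res.json(books.filter(b => b.authorId === Number(req.query.uid)));
});

app.listen(3000, () => console.log('book-service listening on 3000'));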

After startup, we open http://localhost:3000/book/1 in the browser.


Now that we have the two microservices, the next step is to launch a Sidecar instance that plugs Node.js into Spring Cloud.

    @SpringBootApplication

Very simple: the main class just adds @EnableSidecar next to @SpringBootApplication. Note that before this you need a running eureka-server; and because I also want to test the Sidecar's ability to proxy access to Spring Cloud Config, I use a config-server as well, which anyone familiar with Spring Cloud will recognize.

In the Sidecar's configuration, bootstrap.yaml simply specifies the service port and the address of the config-server. In node-sidecar.yaml we point the Sidecar at the address of the Node.js service. hystrix.command.default.execution.timeout.enabled: false is there because the Sidecar wraps calls in Hystrix's default one-second timeout, and with the speed of access to GitHub from here, my test calls to config-server kept timing out, so I simply disabled it; you could also just lengthen the timeout instead.
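As a rough sketch, a node-sidecar.yaml along these lines might look like the following; the eureka-server address is an assumption for illustration, while the Node.js port and the Hystrix switch match what is described above:

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8700/eureka/   # assumed address of the eureka-server
sidecar:
  port: 3000                                  # port of the Node.js book service
  health-uri: http://localhost:3000/health    # health endpoint exposed by the Node.js app
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false   # disable Hystrix's one-second default timeout, as explained above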

Once eureka-server, config-server, user-service, node-sidecar and node-book-service are all started, we open the Eureka main page:

http://localhost:8700/

(screenshot: the Eureka dashboard with all services registered)

Our services are all in the UP state, so everything is working. Next, look at the console of the Node.js application:

(screenshot: console output of the Node.js application)

Traffic has come in, and the path being hit is /health: this is clearly the node-sidecar calling our Node application for its health check.

Now comes the moment of truth. We curl the Sidecar's port 8741:

curl localhost:8741/user-service/12

The result is identical to calling user-service directly, which shows that the Sidecar's Zuul proxy forwards our request to the user-service service.

With this proxy in place, we now want the book service to expose an author-information interface:

const SIDECAR = {
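A sketch of how that fragment might continue, reusing the express app from the sketch above and assuming node-fetch v2 (any HTTP client would do); every Spring Cloud service is reached through the Sidecar's Zuul proxy on port 8741:

const fetch = require('node-fetch');

// All Spring Cloud services are reachable through the Sidecar's Zuul proxy.
const SIDECAR = {
  host: 'localhost',
  port: 8741
};

// GET /book/:id/author: look the book up locally, then ask user-service
// for the author via http://localhost:8741/user-service/{authorId}.
app.get('/book/:id/author', (req, res) => {
  const book = books.find(b => b.id === Number(req.params.id));
  if (!book) return res.status(404).end();

  fetch(`http://${SIDECAR.host}:${SIDECAR.port}/user-service/${book.authorId}`)
    .then(r => r.json())
    .then(author => res.json(author))
    .catch(() => res.status(502).end());
});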

Opening http://localhost:3000/book/2/author now shows the author information for the book with id 2. But there is a problem: unlike the user-service proxying we just did, we cannot reach the Node.js interface through http://localhost:8741/node-sidecar/book/1, so how is user-service supposed to fetch the book data? Recalling the theory in the first part, we can call /hosts/<serviceId> to get the instance information of any service, so let's try http://localhost:8741/hosts/node-sidecar and see what comes back:


(screenshot: instance information returned by /hosts/node-sidecar)

The response contains, among other things, the URI of the Node.js application. So could we call this Sidecar interface first, obtain the real URI, and then call book-service's /books?uid=<uid> interface ourselves? We could, but Spring Cloud already ships a tool that does exactly this for us: Feign. Create a new BookFeignClient.java:

@FeignClient(name = "node-sidecar")

FeignClient resolves the serviceId to the corresponding service address in Eureka; if the service has several instances, Ribbon provides client-side load balancing. A set of RequestMapping-style annotations keeps the client declaration consistent with the server-side controller. By defining a findByUid method we can comfortably call the /books?uid=<uid> interface defined above in Node.js, which is exactly the Sidecar architecture sketched earlier.

Now we define a new type Author in user-service, which extends User and adds a books field:

class Author extends User {

    Add an interface to get Author:

@GetMapping("/author/{id}")

The logic is simple: fetch the corresponding user, fetch the user's books through bookFeignClient by uid, then assemble the Author and return it.

Visiting http://localhost:8720/author/11 returns:

(screenshot: the JSON returned by /author/11)

So far, with nothing more than the Sidecar and plain HTTP, we have made Java and Node.js call each other. For more of the same, such as reading configuration from config-server or looking up application information from Eureka, you can download the source code of my experiment and dig in.

I put the whole demo on my GitHub; you can clone it directly:

git clone https://github.com/marshalYuan/spring-cloud-example.git

The project layout is roughly:

  • eureka-server         // the Eureka Server in the figure above
  • config-server         // the Config Server in the figure above
  • config-repo           // the configuration repository (config-server's searchPath)
  • user-service          // service written in Java, acting as both provider and consumer
  • node-sidecar          // the Sidecar instance that bridges Node.js and Spring Cloud
  • book-service-by-node  // REST service written with express.js

You can start the five applications in the following order:

eureka-server -> config-server -> user-service -> book-service-by-node -> node-sidecar

The demo exists purely for experimenting, so it is kept deliberately simple; do not expect production-grade robustness.

A few words at the end

As said at the beginning, thanks to the universal HTTP protocol and the rich Netflix suite, we can plug non-JVM languages such as Node.js, PHP and Python into the very mature microservice framework that is Spring Cloud and quickly build our microservice business systems. You might ask: why not just use Java everywhere? Indeed, developing and maintaining a single system in a single language is much cheaper, but there are situations where the Sidecar approach is worth choosing.

For example, when the historical burden is too heavy to move everything onto the Java platform and you do not want to rewrite the existing services, the Sidecar lets you integrate them with the Java side at the mere cost of agreeing on a common protocol.

There is also the idea of "harvesting the language dividend". Choosing a language means choosing the tools and libraries that come with it. Python, for instance, is popular for data analysis, so the analytics part of a microservice system could be written in Python; Node.js has an excellent asynchronous, event-driven model, so it can serve the parts that handle large numbers of asynchronous requests; and so on. This is not meant to start another "best language" holy war: comparing languages without looking at use cases and ecosystems is just trolling. To give one example, I do not know of a language that expresses a Pythagorean triple as concisely as Haskell:

    Pythagorean triple

[ (x, y, z) | x <- [1..100], y <- [x..100], z <- [y..100], x^2 + y^2 == z^2 ]

Besides, our topic here is Node.js, and everyone knows the best language is PHP anyway. Time to run ~~~

    The past and present of JavaScript’s prototype and prototype chain (1)

Don't be put off by the grand-sounding title; I am not going to recount the history of prototype. This article only wants to help you understand why the prototype and the prototype chain make JavaScript such a distinctive language: none of the other languages I have learned has anything quite like them, and they were the most puzzling part for me when I came to JavaScript from C.

1. Starting from object creation in JavaScript

As we all know, JavaScript is an object-oriented language, yet it has no concept of classes (setting aside the current ES6 standard; personally I feel ES6 is a new standard wrapped around ES5, its essence is still ES5, so mastering ES5 is what matters). Having no classes does not mean having no objects, and JS objects differ from those in other object-oriented languages such as C++: every object is based on a reference type (Array, Date, Function and so on are all reference types; see chapter 5 of JavaScript Advanced Programming, 3rd edition) or on a custom type.

The most common way to create an object used to be creating an Object instance:

var animal = new Object();

Later came the object literal way of creating objects:

var Animal = {
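Filled out, a sketch of these two creation styles with the name, type and say members discussed below; the concrete values are assumptions for illustration:

// Creating an Object instance and attaching members one by one:
var animal = new Object();
animal.name = 'Tom';
animal.type = 'cat';
animal.say = function () {
  console.log('I am a ' + this.type);
};

// The same object written as an object literal:
var Animal = {
  name: 'Tom',
  type: 'cat',
  say: function () {
    console.log('I am a ' + this.type);
  }
};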

First of all, it is clear that an object consists of properties and methods: name and type are clearly properties, and say is clearly a method. Second, properties have corresponding internal attributes inside the browser, which are used by the JS engine itself.

1.1 A word about properties (Property) in JS objects

According to the ES5 standard, a property is not just the name/value pair we usually picture; inside the browser there is quite a bit more to it.

Properties are divided into data properties (Data Property) and accessor properties (Accessor Property). The name and type we just defined are data properties; the way to tell the two kinds apart is that an accessor property has [[Get]] and [[Set]] methods and has no [[Value]] attribute (Attribute).


A data property has 4 attributes: [[Configurable]], [[Enumerable]], [[Writable]] and [[Value]].


An accessor property has 4 attributes: [[Configurable]], [[Enumerable]], [[Get]] and [[Set]].


Although these attributes are used internally by the browser, ES5 still exposes an interface so that we can work with them:

Object.defineProperty(obj, prop, descriptor);

    Take an example (in the Chrome console):

> Object.getOwnPropertyDescriptor(Person, "name")
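A sketch of what such a console session might look like; the Person object and its values are assumptions for illustration:

// Define a Person whose name is read-only, then inspect the descriptor.
var Person = {};
Object.defineProperty(Person, 'name', {
  value: 'Nicholas',
  writable: false,       // [[Writable]]
  enumerable: true,      // [[Enumerable]]
  configurable: false    // [[Configurable]]
});

Object.getOwnPropertyDescriptor(Person, 'name');
// -> { value: "Nicholas", writable: false, enumerable: true, configurable: false }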

The details of these APIs (such as browser compatibility) can be found on MDN.

2. Better ways to create JS objects

Although both the Object constructor and the object literal can be used to create a single object, there is an obvious flaw: mixing object creation and object instantiation this way means the code cannot be reused and piles of duplicated code appear. To solve this, a new way of creating objects emerged: the factory pattern. This form starts to resemble the instantiation of classes and objects in C++ and is closer to real-world development.


2.1 The factory pattern

The name is very evocative: as soon as we hear it we picture a factory. We supply the raw materials, and the factory's mould turns out the objects we want (that is, the instantiation process).

Because ES5 cannot declare classes, we can only use a function to wrap the details of object creation behind a specific interface. For example:

function createAnimal(name, type) {
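A minimal sketch of such a factory function, reusing the say method from the earlier example; the sample values are assumptions:

function createAnimal(name, type) {
  var o = new Object();
  o.name = name;
  o.type = type;
  o.say = function () {
    console.log('I am a ' + this.type);
  };
  return o;                         // the factory hands back a finished object
}

var cat = createAnimal('Tom', 'cat');
var dog = createAnimal('Spike', 'dog');
cat.say();   // I am a cat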

Although this approach solves the problem of duplicated instantiation code, it does not solve the problem of object identification, i.e. an object created this way cannot tell you what kind of object it is. So yet another way of creating objects appeared.

2.2 The constructor pattern

The constructor is a basic concept in C++: it is called right after an object is instantiated to initialize it and perform some assignments, so it can be thought of as an initialization function. The constructors used in JS look different from C++ on the surface, but in essence they are the same thing. JS provides native constructors such as Object, Array and String, and we can also write custom ones. For example:

function Animal(name, type) {
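A sketch of such a custom constructor, again with the name/type/say members used throughout this article:

function Animal(name, type) {
  // Properties and methods are assigned straight onto `this`;
  // there is no explicit object creation and no return statement.
  this.name = name;
  this.type = type;
  this.say = function () {
    console.log('I am a ' + this.type);
  };
}

var dog = new Animal('Spike', 'dog');
dog.say();                                 // I am a dog
console.log(dog.constructor === Animal);   // true
console.log(dog instanceof Animal);        // true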

    This constructor has the following three features:

  • There is no explicit creation of an object
  • Properties and methods are assigned directly to the this object
  • There is no return statement

When new is executed, the following 4 steps take place:

  • Create a new object
  • Assign the constructor's scope to the new object (so this points to the new object)
  • Execute the code inside the constructor
  • Return the new object

At this point dog is an instance of Animal. Just as in the C++ tradition, every instance has a constructor property, and in JS the instance's constructor property points to the Animal constructor.

The first three methods all create essentially the same object, so let's take one of them and compare it with the factory pattern:

(Figure 1: console comparison of an object built by the constructor and one built by the factory function)

As the figure shows, the object built by the constructor does carry extra members, and why they are all gathered under __proto__ is exactly what we will come to later.

So the constructor property lets us identify the type of an object (the Animal type in this example), which can be verified with instanceof.

Of course, the constructor pattern is not perfect either. Its main problem is that every method is re-created on every instance: when we create an Animal object, the method inside it is actually an instance of the Function object, which is logically equivalent to writing:

this.say = new Function("console.log('I am a ' + this.type);");

Creating many instances therefore creates just as many Function objects, which obviously wastes memory. To solve this, the prototype pattern was introduced.

3. The prototype pattern

In Figure 1 we already saw that every object, no matter how it was created, has a __proto__ property; it is the key that links the prototype chain together. In the previous section we listed the 4 steps performed by new, and the second of them is essentially an assignment, namely dog.__proto__ = Animal.prototype, which you can confirm by printing in the console:


(screenshot: the console shows that dog.__proto__ points to Animal.prototype)

The prototype pattern creates objects in a form like this:

function Animal() {}
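A sketch of the prototype pattern with the same members as before; note that say now lives on the prototype and is shared by every instance:

function Animal() {}

// Members added to the prototype are shared by all instances.
Animal.prototype.name = 'Tom';
Animal.prototype.type = 'cat';
Animal.prototype.say = function () {
  console.log('I am a ' + this.type);
};

var a = new Animal();
var b = new Animal();
a.say();                        // I am a cat
console.log(a.say === b.say);   // true: one shared Function object, not one per instance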

The difference between the prototype pattern and the constructor pattern can be seen in the following two diagrams:

    Constructor pattern:

(figure: object layout under the constructor pattern)

Prototype pattern:

(figure: object layout under the prototype pattern)

From these two pictures we can see the strengths and weaknesses of each and how to improve on them: can we combine the two and keep the advantages of both? If that is what you are thinking, you are right, and that is exactly what section 3.2 is about.

Besides assigning to the prototype one member at a time, we often prefer to rewrite the prototype with an object literal or with the new keyword. Doing so, however, comes with a very important point: whether you use an object literal or new, you are creating a brand-new object that replaces the original prototype object.

Sounds abstract? A picture will make it clear:

    Code:

function Animal() {}

    Or:

function Species() {
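Filled out, a sketch of the two variants being contrasted here; the Species body is an assumption for illustration, and the point in both cases is that Animal's original prototype object gets replaced:

function Animal() {}

// Variant 1: replace the prototype with an object literal.
Animal.prototype = {
  name: 'Tom',
  type: 'cat',
  say: function () { console.log('I am a ' + this.type); }
};
console.log(new Animal().constructor === Animal);   // false, it is now Object

// Variant 2: replace the prototype with an object created by new.
function Species() {
  this.kingdom = 'Animalia';   // illustrative property, not from the article
}
Animal.prototype = new Species();
console.log(new Animal().constructor === Animal);   // false, it is now Species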

The prototype diagrams of these two variants are shown below:

(figure: the prototype object after being replaced)

It is precisely because of this overriding effect that, when you rewrite a prototype this way, you must pay attention to whether the prototype object you are dealing with is still the original prototype object.

Each of the two approaches has its own advantages, so how do we combine them? And what exactly is the relationship between the prototype chain and the prototype?

Read on in the next part: The past and present of JavaScript's prototype and prototype chain (2)