The Binder driver is Android's primary cross-process communication (IPC) mechanism. In a typical transaction, the client first queries ServiceManager for a remote reference to the service; with that reference in hand, it sends a request to the thread in the server's process, and finally the server returns the result to the client.
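To make that flow concrete, here is a minimal client-side sketch. Note the assumptions: android.os.ServiceManager and ActivityManagerNative are framework-internal (hidden) classes of the Android era this article covers, so ordinary apps would go through Context.getSystemService instead; treat this as an illustration, not public API.

```java
import android.app.ActivityManagerNative;
import android.app.IActivityManager;
import android.os.IBinder;
import android.os.RemoteException;
import android.os.ServiceManager;

class BinderFlowSketch {
    // Illustrative only: these are hidden framework APIs, not the public SDK.
    static void callActivityManager() throws RemoteException {
        // 1. Query ServiceManager for the service's remote reference
        //    (an IBinder backed by a handle into the Binder driver).
        IBinder b = ServiceManager.getService("activity");
        // 2. Wrap the raw reference in a typed proxy.
        IActivityManager am = ActivityManagerNative.asInterface(b);
        // 3. Calls on the proxy are marshalled through the Binder driver to a
        //    Binder thread in the server's process, which sends the reply back.
    }
}
```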
This article takes the startup and request handling of ActivityManager as an example to walk through how the Binder mechanism is implemented. It is organized into the following parts:
1) Starting ServiceManager
2) Starting ActivityManager
3) How the SystemServer process listens for requests
4) Using ActivityManager
5) About Messenger and AIDL
1. Starting ServiceManager

The init.rc startup script:
```
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
```
The entry point is the main function in service_manager.c:
```c
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128 * 1024);

    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
```
This function does three main things:

* opens the Binder device and maps 128 KB of memory into both kernel space and the ServiceManager process's address space;
* makes the ServiceManager process the manager (context manager) of Binder;
* enters the message loop and waits for client requests.
1) Opening the Binder device

Every process that wants to interact with the Binder driver must first open the Binder device. The Binder device is a virtual device corresponding to the file /dev/binder.

Opening the Binder device is implemented by the binder_open function in the userspace binder.c:
```c
struct binder_state
{
    int fd;
    void *mapped;
    unsigned mapsize;
};

struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;
    ......
    bs->fd = open("/dev/binder", O_RDWR);
    ......
    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    ......
    return bs;
    ......
}
```
The call open("/dev/binder", O_RDWR) goes through a system call and eventually lands in the kernel-level binder_open function in the kernel's binder.c:
```c
struct binder_proc {
    ......
    struct rb_root threads;
    struct rb_root nodes;
    ......
};

static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;
    ......
    proc->tsk = current;
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);
    proc->pid = current->group_leader->pid;
    ......
    filp->private_data = proc;
    ......
    return 0;
}
```
The open function mainly initializes a binder_proc struct, which records important information about the current process, including all threads that handle Binder transactions and the object addresses of the services it has registered; the proc struct is then stored in the private_data field of the file struct.
One thing about mmap deserves attention: it maps a single block of physical memory into both kernel space and the ServiceManager process's address space. The benefit is that when data is copied from the client's user space into kernel space, the kernel and ServiceManager share that memory, which saves the second copy from kernel space into ServiceManager.
2) Making the ServiceManager process the manager of Binder

The binder_become_context_manager function in binder.c:
```c
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
```
ioctl is the core operation of Binder communication; most interaction with the Binder driver goes through this call. It is implemented by the binder_ioctl function in the kernel's binder.c:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    ......
    struct binder_thread *thread;
    thread = binder_get_thread(proc);
    ......
    switch (cmd) {
    case BINDER_WRITE_READ:
        ......
        break;
    case BINDER_SET_MAX_THREADS:
        ......
        break;
    case BINDER_SET_CONTEXT_MGR:
        ......
        break;
    case BINDER_THREAD_EXIT:
        ......
        break;
    case BINDER_VERSION:
        ......
        break;
    default:
        ......
    }
    ......
}
```
Let's focus on the implementation of the BINDER_SET_CONTEXT_MGR command:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ......
    switch (cmd) {
    ......
    case BINDER_SET_CONTEXT_MGR:
        ......
        binder_context_mgr_uid = current->cred->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        ......
        break;
    ......
    }
    ......
}
```
This command does two things: it stores the ServiceManager process's uid in the global variable binder_context_mgr_uid, and it creates a new binder_node object recording the ServiceManager process's information, storing it in the global variable binder_context_mgr_node, ready to be handed out when a client asks for ServiceManager.

binder_node is the most central struct in Binder communication. For every service registered later, the Binder driver creates a binder_node that records the service object's address, process information, and so on. The so-called "remote reference" to a service, the thing ServiceManager stores and clients receive, is in fact a reference to this binder_node.
3) Entering the message loop

After the BINDER_SET_CONTEXT_MGR ioctl completes, control returns to main in service_manager.c, which next calls binder_loop to enter the loop:
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
It first calls the binder_write function:
```c
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}
```
Now look at the implementation of the BINDER_WRITE_READ command in binder_ioctl:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ......
    switch (cmd) {
    ......
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        ......
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        ......
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
                                      bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {
            ......
        }
        ......
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    ......
    }
    ......
}

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    ......
    case BC_ENTER_LOOPER:
        ......
        if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
            thread->looper |= BINDER_LOOPER_STATE_INVALID;
            ......
        }
        thread->looper |= BINDER_LOOPER_STATE_ENTERED;
        break;
    ......
}
```
This command mainly sets the thread's looper state to BINDER_LOOPER_STATE_ENTERED, marking that the thread has entered the loop.
Back in binder_loop, execution enters the for loop:
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
This time write_size is 0 and read_size is 32 * 4 bytes, so the BINDER_WRITE_READ command in binder_ioctl takes the read branch:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ......
    switch (cmd) {
    ......
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        ......
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        ......
        if (bwr.write_size > 0) {
            ......
        }
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer,
                                     bwr.read_size, &bwr.read_consumed,
                                     filp->f_flags & O_NONBLOCK);
            trace_binder_read_done(ret);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        ......
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    ......
    }
    ......
}

static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
                              void __user *buffer, int size,
                              signed long *consumed, int non_block)
{
    ......
    wait_for_proc_work = thread->transaction_stack == NULL &&
                         list_empty(&thread->todo);
    ......
    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work) {
        ......
        ret = wait_event_freezable_exclusive(proc->wait,
                                             binder_has_proc_work(proc, thread));
    }
    ......
}
```
Execution reaches binder_thread_read. When there is no work to handle, the current thread (ServiceManager's main thread) goes to sleep and waits to be woken up by a client request. With that, the startup of ServiceManager is complete.
2. Starting ActivityManagerService

Now take the startup of ActivityManagerService as an example. The init.rc startup script:
```
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

......

service zygote /system/bin/app_process -Xzygote /system/bin --zygote --start-system-server
    class main
    socket zygote stream 666
    onrestart write /sys/android_power/request_state wake
    onrestart write /sys/power/state on
    onrestart restart media
    onrestart restart netd
```
In the startup script, after ServiceManager has started, the zygote process starts, and zygote then starts the SystemServer process. Enter SystemServer's main entry point:
```java
public class SystemServer {
    ......
    public static void main(String[] args) {
        ......
        init1(args);
    }

    public static final void init2() {
        Slog.i(TAG, "Entered the Android system server!");
        Thread thr = new ServerThread();
        thr.setName("android.server.ServerThread");
        thr.start();
    }

    class ServerThread extends Thread {
        ......
        public void run() {
            ......
            ActivityManagerService.setSystemProcess();
            ......
        }
    }
}
```
In main, the initialization function init1 is called. Its implementation lives in the native layer: it mainly initializes some process parameters and starts a few services as needed, then calls the Java-level init2 function via reflection, and finally enters the message loop.

In init2, a ServerThread thread is started, which loads a large number of system services such as ActivityManagerService, WindowManagerService, and PowerManagerService.
In ServerThread's run method, execution enters ActivityManagerService's setSystemProcess function:
```java
public class ActivityManagerService {
    public static void setSystemProcess() {
        ......
        ActivityManagerService m = mSelf;
        ServiceManager.addService("activity", m);
        ......
    }
}
```
This calls ServiceManager's addService function to register ActivityManagerService with ServiceManager:
```java
public final class ServiceManager {
    ......
    private static IServiceManager getIServiceManager() {
        if (sServiceManager != null) {
            return sServiceManager;
        }
        sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
        return sServiceManager;
    }
    ......
    public static void addService(String name, IBinder service) {
        try {
            getIServiceManager().addService(name, service);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
    }
    ......
}
```
This accomplishes two main things:

* obtaining a remote reference to ServiceManager;
* registering itself with ServiceManager.
1) Obtaining a remote reference to ServiceManager

First look at BinderInternal's getContextObject function:
```java
public class BinderInternal {
    ......
    public static final native IBinder getContextObject();
    ......
}
```
This drops into the native layer; the implementation is android_os_BinderInternal_getContextObject in android_util_Binder.cpp:
```cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}
```
It first calls ProcessState's self function to initialize itself:
```cpp
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    ......
}

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    ......
    int vers;
    status_t result = ioctl(fd, BINDER_VERSION, &vers);
    ......
    size_t maxThreads = 15;
    result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    ......
    return fd;
}
```
The self function actually creates a ProcessState object, a process-wide singleton; during its construction, ProcessState opens the Binder device.

As mentioned above, open ends up in the kernel-level binder_open function, which initializes a binder_proc struct (proc) recording information about the current process, saves proc in the private_data field of the Binder virtual device's file struct for later use, and finally returns a file descriptor for the Binder virtual device.
Back in android_os_BinderInternal_getContextObject, once ProcessState is initialized, its getContextObject function is called:
```cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    ......
    b = new BpBinder(handle);
    result = b;
    ......
    return result;
}
```
This function creates a BpBinder object holding handle 0, i.e. the remote reference to ServiceManager. So this:
```cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}
```
is equivalent to:
```cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = new BpBinder(0);
    return javaObjectForIBinder(env, b);
}
```
Now look at the construction of BpBinder:
```cpp
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ......
    IPCThreadState::self()->incWeakHandle(handle);
}

IPCThreadState* IPCThreadState::self()
{
    ......
    return new IPCThreadState;
    ......
}

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    ......
}
```
BpBinder first stores the handle in its mHandle member and then creates an IPCThreadState object, a thread-local singleton, which saves the ProcessState object created earlier in its mProcess member. From here on, all interaction with the Binder driver happens directly inside IPCThreadState.
Back to the getContextObject function:
```cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = new BpBinder(0);
    return javaObjectForIBinder(env, b);
}
```
In the end, a BpBinder object containing the remote reference to ServiceManager is created and converted into a Java-level BinderProxy object; the BpBinder's address is stored in the BinderProxy's mObject field.
Back in the Java-level ServiceManager:
```java
public final class ServiceManager {
    ......
    private static IServiceManager getIServiceManager() {
        if (sServiceManager != null) {
            return sServiceManager;
        }
        sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
        return sServiceManager;
    }
    ......
    public static void addService(String name, IBinder service) {
        try {
            getIServiceManager().addService(name, service);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
    }
    ......
}
```
Next comes the asInterface function:
```java
public abstract class ServiceManagerNative extends Binder implements IServiceManager {
    ......
    static public IServiceManager asInterface(IBinder obj) {
        if (obj == null) {
            return null;
        }
        IServiceManager in = (IServiceManager) obj.queryLocalInterface(descriptor);
        if (in != null) {
            return in;
        }
        return new ServiceManagerProxy(obj);
    }
    ......
}
```
where:
```java
final class BinderProxy implements IBinder {
    public IInterface queryLocalInterface(String descriptor) {
        return null;
    }
}
```
BinderInternal.getContextObject() returns a BinderProxy object. Inside asInterface, queryLocalInterface returns null, so in the end a new ServiceManagerProxy object is created and returned:
```java
class ServiceManagerProxy implements IServiceManager {
    public ServiceManagerProxy(IBinder remote) {
        mRemote = remote;
    }
}
```
So the ServiceManagerProxy's mRemote member holds a BinderProxy object, that BinderProxy's mObject holds a native BpBinder object, and that BpBinder holds the remote reference to ServiceManager.

At this point we have successfully obtained ServiceManagerProxy, a proxy object holding a remote reference to ServiceManager.
2) Registering the service with ServiceManager

Back to ServiceManager's addService:
```java
public final class ServiceManager {
    ......
    private static IServiceManager getIServiceManager() {
        if (sServiceManager != null) {
            return sServiceManager;
        }
        sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
        return sServiceManager;
    }
    ......
    public static void addService(String name, IBinder service) {
        try {
            getIServiceManager().addService(name, service);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
    }
    ......
}
```
This ultimately calls ServiceManagerProxy's addService function:
```java
class ServiceManagerProxy implements IServiceManager {
    public ServiceManagerProxy(IBinder remote) {
        mRemote = remote;
    }

    public void addService(String name, IBinder service) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        data.writeStrongBinder(service);
        mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
        reply.recycle();
        data.recycle();
    }
}
```
mRemote is a BinderProxy object, so BinderProxy's transact function is called:
```java
final class BinderProxy implements IBinder {
    public native boolean transact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException;
}
```
This is a native method, implemented by the android_os_BinderProxy_transact function in android_util_Binder.cpp:
```cpp
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags)
{
    ......
    Parcel* data = parcelForJavaObject(env, dataObj);
    ......
    Parcel* reply = parcelForJavaObject(env, replyObj);
    ......
    IBinder* target = (IBinder*) env->GetIntField(obj, gBinderProxyOffsets.mObject);
    ......
    status_t err = target->transact(code, *data, reply, flags);
    ......
}
```
This function retrieves from the BinderProxy object the BpBinder created earlier in the native layer; that object's mHandle member is 0, i.e. the remote reference to ServiceManager. Execution then enters BpBinder's transact function:
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
```
All interaction with the Binder driver is done in IPCThreadState, so we enter IPCThreadState's transact function:
```cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ......
    if (err == NO_ERROR) {
        ......
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ......
    if ((flags & TF_ONE_WAY) == 0) {
        ......
        if (reply) {
            err = waitForResponse(reply);
        } else {
            ......
        }
        ......
    } else {
        ......
    }
    return err;
}
```
This function performs two main operations: writeTransactionData and waitForResponse. The writeTransactionData function:
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount() * sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        ......
    } else {
        ......
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
```
This function initializes a binder_transaction_data struct, writes the remote handle, the command code, and the data into it, and appends the struct to the mOut buffer. Then waitForResponse executes:
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;
        ......
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            ......
        case BR_DEAD_REPLY:
            ......
        case BR_FAILED_REPLY:
            ......
        case BR_ACQUIRE_RESULT:
            ......
        case BR_REPLY:
            ......
        default:
            ......
        }
        ......
    }
    return err;
}
```
This function enters an infinite loop whose main job is to call talkWithDriver to interact with the Binder driver. The talkWithDriver function:
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ......
    binder_write_read bwr;

    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int) mOut.data();

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int) mIn.data();
    } else {
        ......
    }
    ......
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        ......
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
    } while (err == -EINTR);
    ......
    return err;
}
```
This function first gathers the pending data from the mOut buffer, then drops into the kernel and executes the binder_ioctl function:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        ......
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        ......
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
                                      bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer,
                                     bwr.read_size, &bwr.read_consumed,
                                     filp->f_flags & O_NONBLOCK);
            trace_binder_read_done(ret);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        ......
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    ......
    }
}
```
Here the data is first copied from user space into kernel space. If write_size is greater than 0, binder_thread_write executes first; if read_size is greater than 0, binder_thread_read executes; finally the results are copied back to user space.

Because data was read out of the mOut buffer, write_size is greater than 0 here, so execution enters the binder_thread_write function.
3. About Messenger and AIDL

If you want to implement cross-process communication yourself, Android provides two tools.
The first is the Messenger class. The official guidance is that if you have no need for multithreaded request handling, Messenger is the better choice: Messenger communication is built on Handler message processing, and since all messages go through one queue, handling is single-threaded. Internally, Handler implements an IMessenger interface, and that IMessenger is itself generated from an .aidl file.
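As a quick illustration, here is a minimal sketch of a Messenger-based service; the MessengerService class and the MSG_SAY_HELLO code are made-up names for this example:

```java
import android.app.Service;
import android.content.Intent;
import android.os.Handler;
import android.os.IBinder;
import android.os.Message;
import android.os.Messenger;
import android.util.Log;

// Server side: every incoming Message is queued onto this Handler's Looper,
// so requests are handled one at a time on a single thread.
public class MessengerService extends Service {
    public static final int MSG_SAY_HELLO = 1;

    private final Messenger mMessenger = new Messenger(new Handler() {
        @Override
        public void handleMessage(Message msg) {
            if (msg.what == MSG_SAY_HELLO) {
                Log.i("MessengerService", "hello, client!");
            } else {
                super.handleMessage(msg);
            }
        }
    });

    @Override
    public IBinder onBind(Intent intent) {
        // getBinder() exposes the Handler's internal IMessenger as an IBinder.
        return mMessenger.getBinder();
    }
}
```

On the client side, onServiceConnected receives this IBinder; wrapping it in new Messenger(service) and calling send() goes through the same BinderProxy.transact path traced above.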
The second is the AIDL language: the interface you write in an .aidl file is compiled into a Binder class that supports cross-process communication.
Let's use a simple .aidl file as an example to see how Binder communication is put together. ITest.aidl:
```aidl
package com.uitest.aidl;

interface ITest {
    void set(int value);
    int get();
}
```
This file defines an interface, ITest, with two methods, get and set. Once the file is written, compilation generates (IDEs regenerate it as soon as the file is saved) a Java class named com.uitest.aidl.ITest:
```java
package com.uitest.aidl;

public interface ITest extends android.os.IInterface {
    public static abstract class Stub extends android.os.Binder implements com.uitest.aidl.ITest {
        private static final java.lang.String DESCRIPTOR = "com.uitest.aidl.ITest";

        public Stub() {
            this.attachInterface(this, DESCRIPTOR);
        }

        public static com.uitest.aidl.ITest asInterface(android.os.IBinder obj) {
            if ((obj == null)) {
                return null;
            }
            android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
            if (((iin != null) && (iin instanceof com.uitest.aidl.ITest))) {
                return ((com.uitest.aidl.ITest) iin);
            }
            return new com.uitest.aidl.ITest.Stub.Proxy(obj);
        }

        @Override
        public android.os.IBinder asBinder() {
            return this;
        }

        @Override
        public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags)
                throws android.os.RemoteException {
            switch (code) {
                case INTERFACE_TRANSACTION: {
                    reply.writeString(DESCRIPTOR);
                    return true;
                }
                case TRANSACTION_set: {
                    data.enforceInterface(DESCRIPTOR);
                    int _arg0;
                    _arg0 = data.readInt();
                    this.set(_arg0);
                    reply.writeNoException();
                    return true;
                }
                case TRANSACTION_get: {
                    data.enforceInterface(DESCRIPTOR);
                    int _result = this.get();
                    reply.writeNoException();
                    reply.writeInt(_result);
                    return true;
                }
            }
            return super.onTransact(code, data, reply, flags);
        }

        private static class Proxy implements com.uitest.aidl.ITest {
            private android.os.IBinder mRemote;

            Proxy(android.os.IBinder remote) {
                mRemote = remote;
            }

            @Override
            public android.os.IBinder asBinder() {
                return mRemote;
            }

            public java.lang.String getInterfaceDescriptor() {
                return DESCRIPTOR;
            }

            @Override
            public void set(int value) throws android.os.RemoteException {
                android.os.Parcel _data = android.os.Parcel.obtain();
                android.os.Parcel _reply = android.os.Parcel.obtain();
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    _data.writeInt(value);
                    mRemote.transact(Stub.TRANSACTION_set, _data, _reply, 0);
                    _reply.readException();
                } finally {
                    _reply.recycle();
                    _data.recycle();
                }
            }

            @Override
            public int get() throws android.os.RemoteException {
                android.os.Parcel _data = android.os.Parcel.obtain();
                android.os.Parcel _reply = android.os.Parcel.obtain();
                int _result;
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    mRemote.transact(Stub.TRANSACTION_get, _data, _reply, 0);
                    _reply.readException();
                    _result = _reply.readInt();
                } finally {
                    _reply.recycle();
                    _data.recycle();
                }
                return _result;
            }
        }

        static final int TRANSACTION_set = (android.os.IBinder.FIRST_CALL_TRANSACTION + 0);
        static final int TRANSACTION_get = (android.os.IBinder.FIRST_CALL_TRANSACTION + 1);
    }

    public void set(int value) throws android.os.RemoteException;

    public int get() throws android.os.RemoteException;
}
```
Notice that inside the ITest interface a static abstract class Stub is declared, analogous to the xxxNative classes of many Android system services, along with a static class Proxy, which plays the same role as the xxxProxy classes in system services.
AIDL is usually used together with a Service: the Service implements the Stub class and hands the resulting object to clients that bind to it, and in its ServiceConnection the client calls Stub's asInterface to wrap the Service's remote reference in a Proxy.
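Here is a minimal sketch of that pattern for the ITest interface above; TestService, TestClient, and the synchronization policy are assumptions made for this example:

```java
import android.app.Service;
import android.content.ComponentName;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;
import android.os.RemoteException;
import com.uitest.aidl.ITest;

// Server side: Stub methods run on Binder threads, not the main thread,
// so shared state must be guarded.
public class TestService extends Service {
    private int mValue;

    private final ITest.Stub mBinder = new ITest.Stub() {
        @Override public synchronized void set(int value) { mValue = value; }
        @Override public synchronized int get() { return mValue; }
    };

    @Override
    public IBinder onBind(Intent intent) {
        return mBinder; // handed to the client as a remote reference
    }
}

// Client side (the class that calls Context.bindService with this connection):
class TestClient {
    ITest mTest;

    final ServiceConnection mConn = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            // In a remote process, this wraps the BinderProxy in ITest.Stub.Proxy.
            mTest = ITest.Stub.asInterface(service);
            try {
                mTest.set(42);
                int v = mTest.get(); // blocks until the Binder thread replies
            } catch (RemoteException e) {
                // The service process died mid-call.
            }
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            mTest = null;
        }
    };
}
```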
Calling the proxy's get or set methods follows the same flow traced above: the call enters the Binder driver and wakes a handler thread while the calling thread goes to sleep; a Binder thread in the Service's process executes the implemented method, the result travels back through the Binder driver to the calling thread, and finally returns to the application layer.
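One related detail: the blocking behaviour described here corresponds to flags == 0 in transact. Passing IBinder.FLAG_ONEWAY, which maps to the TF_ONE_WAY branch seen in IPCThreadState::transact above, lets the caller return without sleeping for a reply. A hypothetical raw-transact sketch (the transaction code is made up):

```java
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

class OnewaySketch {
    // 'binder' is a remote IBinder obtained from a ServiceConnection.
    static void fireAndForget(IBinder binder, int code) throws RemoteException {
        Parcel data = Parcel.obtain();
        try {
            // FLAG_ONEWAY: transact() queues the transaction and returns
            // immediately; the caller does not sleep in waitForResponse().
            binder.transact(code, data, null, IBinder.FLAG_ONEWAY);
        } finally {
            data.recycle();
        }
    }
}
```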