Thorough validation of apps applied as part of the app market admission process has the potential to significantly enhance mobile device security. In this paper, we propose AppInspector, an automated security validation system that analyzes apps and generates reports of potential security and privacy violations.
We describe our vision for making smartphone apps more secure through automated validation and outline key challenges such as detecting and analyzing security and privacy violations, ensuring thorough test coverage, and scaling to large numbers of apps. The number of third-party applications, or apps, that the average smartphone user installs has grown rapidly, and browsing app stores has become a form of inexpensive entertainment for millions of people.
Apps are small programs that often provide their functionality by accessing sensitive data. Ensuring that apps properly handle such high-value sensitive data is an important and difficult problem. Recent incidents of malicious apps found in the Android Market show that smartphones are susceptible to the same kinds of malware that have long plagued the PC world.
Furthermore, many other apps, while lacking malicious intent, may unintentionally compromise sensitive data. Recent studies of the Android and iPhone platforms [8, 19, 27] have found that apps frequently share private data in undesirable ways, leaking it to unknown destinations and third-party ad servers.
Similar to approaches for securing PCs, research on mobile device security has explored end-host system solutions such as scanning for viruses using signatures. Our prior work on TaintDroid used taint tracking to detect unwanted exfiltration of sensitive data. It allows a user to track the propagation of sensitive data through and between apps and can raise an alert when sensitive data leaves the device.
However, by the time a leak has been detected and analyzed, it may be too late to protect the sensitive data. Other work explores running end-host replicas dubbed virtual smartphones or clones in the cloud to enable more powerful analysis [15, 25]. However, these approaches still rely on detecting malicious or abnormal behavior after apps have been installed and run on individual smartphones.
Fortunately, unlike in the PC world, we have a unique opportunity to improve the security of mobile applications thanks to the centralized nature of app distribution; users typically obtain apps through just a few popular app markets. Applying security validation at the app-market level offers a great opportunity to enhance mobile app security. However, app markets currently apply either limited manual validation or no validation at all.
The manual validation approach involves experts employed by an app market or a third-party security firm deciding whether to approve an app by manually exercising its functionality and observing its behavior. Unfortunately, experience with Apple's App Store approval process has demonstrated that this approach is less than ideal. Apple's approval process can introduce costly delays and uncertainty into the development cycle, while banned behavior such as WiFi-3G bridging and alleged violators of Apple's privacy policies [3, 4] have still slipped into the App Store.
We believe that automated validation of smartphone apps at the app-market level is a promising approach for drastically improving the security of smartphones. We envision an automated validation process applied either by market providers to apps submitted for inclusion, or by a third-party market filter service that advises users about apps' safety.
Alternatively, a market provider could offload the task of validating submitted apps to a third-party service. In this paper, we present a system that aims to achieve this goal.
At a high level, the system utilizes virtual smartphones running in the cloud to test and verify security properties of apps. By running virtual smartphones in parallel, we can analyze apps at a massive scale. AppInspector is an automated security testing and validation system that embodies this approach. We have identified several important challenges, such as generating inputs that sufficiently explore an app's functionality, logging relevant events at multiple levels of abstraction as the app executes, and using these logs to accurately characterize an app's behavior.
Our exploration is preliminary and intended to initiate discussion on mobile app validation. The rest of this paper discusses each of these challenges in greater detail and proposes several promising techniques for addressing them.
SYSTEM OVERVIEW

We envision a security validation system that (1) analyzes apps submitted to popular app markets, (2) identifies apps that exhibit malicious behavior and should be removed or avoided by users, and (3) facilitates producing easy-to-understand reports informing users of potential privacy risks due to misuse or abuse of sensitive data.
To analyze apps, we propose a dynamic approach that monitors an app's use of sensitive information and checks for suspicious behavior such as excessive resource consumption or deleting user data. In order to scale to hundreds of thousands of apps in a cost-effective manner, this process must be automated as much as possible. However, it would be cost-prohibitive to test such a large number of apps on actual mobile devices.
Instead, we propose using commodity cloud infrastructure to emulate smartphones. This will enable a large-scale security validation service to be built at low cost by utilizing the cloud for computation. A single host may be capable of running multiple virtual device instances at once, and cloud-hosted validation will enable testing many apps in parallel.
Building such an app validation system presents three key challenges: (C1) How do we track and log sensitive information flows and actions to enable root-cause analysis and application behavior profiling? (C2) How do we identify security or privacy violations from collected logs and pinpoint the root cause and execution path that led to the violations? (C3) How do we traverse diverse code paths to ensure that analysis is thorough? In the rest of the paper, we give an overview of AppInspector, our proposed system to address these challenges. At a high level, the envisioned validation system consists of an AppInspector master, which creates multiple AppInspector nodes, each including a virtual smartphone.
Validation is massively parallel and requires little coordination between tasks. The master coordinates scheduling validation tasks on the nodes. We outline the basic steps of a validation task below. Figure 1 illustrates the major components of the system mentioned in the overview. AppInspector first installs and loads the app on a virtual smartphone.
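Because validation tasks are independent, the master's job is essentially to distribute them over nodes. The following minimal Python sketch illustrates the idea with a round-robin assignment; the class names `ValidationTask` and `Node` are illustrative inventions, not from the paper, and a real scheduler would add failure handling and load balancing:

```python
from dataclasses import dataclass

@dataclass
class ValidationTask:
    """One app to validate on one virtual smartphone."""
    app_id: str

class Node:
    """An AppInspector node hosting a virtual smartphone instance."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.completed = []

    def run(self, task):
        # Install the app, drive it with generated input, collect logs.
        self.completed.append(task.app_id)
        return f"report:{task.app_id}"

def schedule(tasks, nodes):
    """Round-robin independent validation tasks over the nodes;
    tasks need no coordination, so any assignment is valid."""
    return [nodes[i % len(nodes)].run(t) for i, t in enumerate(tasks)]

tasks = [ValidationTask(f"app{i}") for i in range(5)]
nodes = [Node(n) for n in range(2)]
reports = schedule(tasks, nodes)
```

Since a single host may run several virtual device instances, each `Node` here could equally represent one emulator instance among many on the same machine.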
Figure 1: AppInspector node architecture

An input generator running on the host PC then injects user-interface events and sensor input. The smartphone application runtime is augmented with an execution explorer that aids in traversing possible execution paths of the app. These two components address C3. While the app runs, an information-flow and action tracking component monitors privacy-sensitive information flows and generates logs, addressing C1.
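As a minimal illustration of automated input generation, the sketch below emits a seeded pseudo-random stream of UI and sensor events; the event vocabulary is a hypothetical stand-in (a real system would inject events through the platform's instrumentation interface), and seeding keeps traces reproducible so that a discovered violation can be replayed:

```python
import random

# Hypothetical event vocabulary; real injection goes through the
# platform's instrumentation interface, not strings.
UI_EVENTS = ["tap", "swipe", "back", "text_input"]
SENSOR_EVENTS = ["gps_fix", "accel_shake"]

def generate_inputs(seed, n):
    """Emit a reproducible pseudo-random stream of n events,
    mostly UI events with occasional sensor input."""
    rng = random.Random(seed)
    events = []
    for _ in range(n):
        pool = UI_EVENTS if rng.random() < 0.8 else SENSOR_EVENTS
        events.append(rng.choice(pool))
    return events

trace_a = generate_inputs(seed=42, n=10)
trace_b = generate_inputs(seed=42, n=10)
assert trace_a == trace_b  # same seed -> same trace, so runs replay
```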
Finally, to address C2, AppInspector provides security analysis tools which can be used after execution completes to interpret the logs and generate a report. In the following sections we further describe these four components. An additional goal is to help smartphone users better understand how all apps handle their privacy-sensitive information, so that they can make informed decisions about which apps to install and use.
To this end, we must first define what we consider to be a security or privacy violation. A security violation occurs when an app performs an action beyond the permissions granted to the app at install-time by the underlying smartphone platform.
For example, if an app accesses sensitive data for which it is not granted permission, this is a clear security violation. Privacy violations can be more subtle. Because many apps collect sensitive information such as location or user identifiers in order to provide useful functionality, simply detecting a transmission of sensitive data is not sufficient to declare a privacy violation.
At a high level, a privacy violation occurs when an app releases sensitive data to a remote party in a way neither expected nor desired by the user. However, encoding user preferences and expectations inside automated analysis is difficult, so for the purposes of automated detection we adopt a working definition. Whether or not a disclosure is considered a privacy violation by a user will often depend on its purpose or intent as perceived by the user. In general, multiple components may be involved in causing a violation.
These may include the app itself, as well as third-party analytics and advertising libraries plugged in by the developer for monetization. However, we note that the involvement of third-party code is not necessary for a violation to occur.
The key asset that AppInspector aims to protect is users' privacy-sensitive data. In order to detect leaks or disclosures and then identify the specific functionality or code component(s) involved in a leak or disclosure, we need to pinpoint the root cause and execution path that led to an outgoing network transmission containing sensitive data.
To support this kind of analysis, it is necessary to track both explicit flows, in which sensitive information propagates through the app, external libraries, and system components through direct data dependencies, as well as implicit flows, e. To track explicit flows of sensitive data, we apply system-wide dynamic taint analysis, or taint tracking [17, 23]. Taint tracking involves attaching a label to data at a sensitive source, such as an API call which returns location data, and propagating this label through program variables, IPC messages, and persistent storage, to detect when it reaches a sink such as an outgoing network transmission.
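A toy model of this source-propagate-sink pipeline can be written in a few lines. The `Tainted` wrapper, `get_location` source, and `network_send` sink below are illustrative stand-ins, not the actual TaintDroid mechanism (which propagates labels inside the interpreter rather than through a wrapper class):

```python
class Tainted:
    """A value carrying a set of taint labels, e.g. {'location'}."""
    def __init__(self, value, labels=()):
        self.value = value
        self.labels = set(labels)

    def __add__(self, other):
        # Propagation rule: a combined value carries the union of
        # its operands' labels.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value,
                           self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

def get_location():
    # Sensitive source: attach the label at the API boundary.
    return Tainted("37.27,127.01", {"location"})

def network_send(data):
    # Sink: flag any transmission whose payload carries labels.
    if isinstance(data, Tainted) and data.labels:
        return ("LEAK", sorted(data.labels))
    return ("OK", [])

payload = Tainted("uid=1&pos=") + get_location()  # taint propagates
result = network_send(payload)                    # ('LEAK', ['location'])
```

The same union rule extends naturally to IPC messages and persistent storage: anything derived from a labeled value inherits its labels until it reaches a sink.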
We take advantage of the fact that apps are often written primarily in interpreted code and executed by a virtual machine to simplify the implementation and reduce the runtime overhead of taint propagation, as in TaintDroid. However, since our analysis is performed offline, we can go a step further and address some limitations posed by that system's real-time operation.
In addition, we can also explore finer-grained taint tracking for native functions. Implicit flows leak sensitive information through program control flow.
For example, consider an if-else statement in which one branch assigns x and the other assigns z, depending on whether w is 0. By watching the values of x and z, which are affected by the control flow, one can learn whether w is 0 or not. To detect such leaks via implicit flows, we can track control dependencies by creating control-dependency edges.
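The if-else statement referred to above did not survive extraction; the classic implicit-flow example consistent with the surrounding text looks like this, sketched in Python:

```python
def implicit_leak(w):
    """Neither x nor z is assigned from w directly, yet after the
    branch their values reveal whether w was zero: an implicit flow
    through control dependence, invisible to data-flow-only taint
    tracking unless control-dependency edges are added."""
    x, z = 0, 0
    if w == 0:
        x = 1
    else:
        z = 1
    return x, z

assert implicit_leak(0) == (1, 0)  # observer learns w == 0
assert implicit_leak(7) == (0, 1)  # observer learns w != 0
```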
This can potentially result in overtainting, or labeling and propagating false dependencies. We note that tracking implicit flows accurately is a long-standing challenge and an active area of research. In addition to flows of sensitive data, we track and log actions performed by applications at multiple levels in the software stack.
Choosing which information to log and at what granularity is an important decision that affects both the depth and quality of the analysis that can be performed later and the runtime performance of the app under test. While it is not critical for a system driven by automated input to achieve real-time performance, large performance overheads could affect the number of execution paths that can be explored as well as the computational cost.
With this in mind, we propose logging several categories of information spanning the software stack. We believe that we can log these categories of information by instrumenting the application runtime and system libraries in a way that will not impose prohibitive performance or log-volume overheads.
An abstraction that we believe will prove useful is dependency graphs, which illustrate the path from the event determined to be the root cause of a malicious use or a misuse of sensitive data, through the data and control flow of the app and potentially other system components, to an eventual network transmission flagged as containing sensitive data.
Dependency graphs are constructed once testing of an app completes using information collected during execution. On top of a dependency graph, we can perform analysis such as backward slicing, filtering, and aggregation.
Backward slicing traverses vertices that are causally dependent, walking from sinks back to sources. Filtering produces a reduced log of an execution by excluding instructions that are unrelated to and unaffected by sensitive information. Finally, aggregation produces a summarized log of the parts of an execution that affect sensitive information.
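The dependency-graph analyses described above can be illustrated with a small sketch; the graph and event names below are invented for the example:

```python
# Toy dependency graph: each event maps to the events it causally
# depends on (data or control dependencies recorded during a run).
deps = {
    "net_send": ["encode"],            # sink flagged as sensitive
    "encode": ["read_location"],
    "read_location": [],               # sensitive source
    "draw_ui": ["load_image"],         # unrelated activity
    "load_image": [],
}

def backward_slice(graph, sink):
    """Walk causal edges backwards from a sink, collecting every
    vertex the sink transitively depends on."""
    seen, stack = set(), [sink]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, []))
    return seen

slice_ = backward_slice(deps, "net_send")
# Filtering keeps only the sliced vertices, so the unrelated UI
# events drop out; aggregation would then summarize this path.
```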