PREV

Security Testing of Permission Re-delegation Vulnerabilities in Android Apps

Summary

The Android platform facilitates reuse of app functionality by allowing an app to request an action from another app through its inter-process communication mechanisms. This feature is one of the reasons for the popularity of Android, but it also poses security risks to end users, because malicious, unprivileged apps could exploit it to make privileged apps perform privileged actions on their behalf.

In this paper, we present a novel approach for precise detection of permission re-delegation vulnerabilities in Android apps. It is a hybrid approach that seamlessly and effectively combines program analysis, test generation, natural language processing, and machine learning techniques. Our approach first clusters a large set of benign, non-vulnerable apps based on the similarity of their functional descriptions. For each cluster, it then infers a permission re-delegation model that characterizes the common permission re-delegation behaviors of the apps in the cluster. Given an app under test, our approach checks whether it exhibits permission re-delegation behaviors that deviate from the model of the cluster it belongs to. If so, it generates test cases to detect the vulnerabilities.

We evaluated the vulnerability detection precision of our approach on 1,258 official apps. We also compared it with three static analysis-based approaches (FlowDroid, Covert, and IccTA) on 595 open-source apps. Our approach detected 30 vulnerable apps and produced zero false alarms; FlowDroid and IccTA did not detect any vulnerable app and produced 8 and 15 false alarms, respectively; Covert detected one vulnerable app and produced 17 false alarms. Executable proof-of-concept attacks generated by our approach were reported to the corresponding app developers.

Prototype components

These prototype components can be used separately for different purposes.

Topic classification

Topic classification is done using Mallet. Although we used 30 topics, feel free to experiment with a different number of topics (the current Play Store also has more than 30 categories).
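Once Mallet has estimated per-app topic proportions, each app can be represented as a fixed-length topic vector for the later clustering step. The sketch below assumes a simplified record format of (topic id, proportion) pairs per app; it is an illustration, not a parser for Mallet's exact doc-topics output, whose layout varies across Mallet versions.

```python
# Build a fixed-length topic vector for one app from (topic_id, proportion)
# records, e.g. extracted from Mallet's doc-topics output.
NUM_TOPICS = 30  # the setting used in the paper; adjust when experimenting

def topic_vector(records, num_topics=NUM_TOPICS):
    """records: iterable of (topic_id, proportion) pairs for one app."""
    vec = [0.0] * num_topics
    for topic_id, proportion in records:
        vec[topic_id] = proportion
    return vec

# Example: an app whose description is dominated by topics 3 and 7
vec = topic_vector([(3, 0.6), (7, 0.3)])
```

Apps with similar descriptions end up with similar topic vectors, which is what the clustering step exploits.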

Clustering

Clustering is done using Weka. Expectation Maximization (EM) algorithm was used.
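To make the clustering step concrete, here is a toy Expectation-Maximization run on 1-D data with two Gaussian components. This is only an illustration of the EM idea; the actual tool uses Weka's EM clusterer on the full topic vectors.

```python
import math

# Illustrative 1-D EM for a 2-component Gaussian mixture -- a toy stand-in
# for Weka's EM clusterer, which operates on full topic vectors.
def em_1d(xs, iters=50):
    mu = [min(xs), max(xs)]  # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the soft assignments
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
            pi[k] = nk / len(xs)
    # hard cluster label: the component with the highest responsibility
    return mu, [max(range(2), key=lambda k: r[k]) for r in resp]

mu, labels = em_1d([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])
```

On this well-separated data, EM recovers two clusters with means near 0.15 and 5.03.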

Cluster assignment

Cluster assignment is done using Weka as a classification problem. Once we have assigned each training app to a cluster, whenever we have a new app to analyze (referred to as the AUT), we perform classification to determine which cluster the app belongs to, so that we can compare its behavior with the apps in the corresponding cluster. The simple Naive Bayes classifier was used.
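The sketch below illustrates this step with a tiny Gaussian Naive Bayes classifier over 2-D feature vectors. The data and cluster names are made up; the actual tool uses Weka's NaiveBayes implementation on full topic vectors.

```python
import math

# Toy Gaussian Naive Bayes: train on (vector, cluster) pairs, then assign
# a new app's vector to the most likely cluster.
def train_nb(X, y):
    model = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [max(1e-6, sum((v - m) ** 2 for v in col) / len(rows))
                 for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(y), means, vars_)
    return model

def classify_nb(model, x):
    def log_lik(c):
        prior, means, vars_ = model[c]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=log_lik)

model = train_nb([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]],
                 ["cluster_A", "cluster_A", "cluster_B", "cluster_B"])
cluster = classify_nb(model, [0.85, 0.15])  # the new AUT's topic vector
```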

API Reachability Analysis

API reachability analysis is performed on the call graph in order to identify privileged APIs that can be reached from public entry point(s). A tool that analyzes an app and exports the reachable sensitive APIs has been implemented and is available as a JAR. The tool takes an APK as input, performs the reachability analysis, and outputs the statically reachable paths. These paths are later subject to test case generation, which attempts to generate inputs that cover the reported paths.
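The core of this analysis can be sketched as a breadth-first search over the call graph, starting at public entry points and recording a path to every privileged API reached. The graph and method names below are invented for illustration; the real tool works on the call graph extracted from the APK.

```python
from collections import deque

# BFS from public entry points over a call graph, recording one statically
# reachable path to each privileged API that is reached.
def reachable_paths(call_graph, entry_points, privileged_apis):
    paths = {}
    queue = deque((ep, [ep]) for ep in entry_points)
    seen = set(entry_points)
    while queue:
        node, path = queue.popleft()
        if node in privileged_apis and node not in paths:
            paths[node] = path  # first (shortest) path found to this API
        for callee in call_graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append((callee, path + [callee]))
    return paths

# Hypothetical app: a public receiver that transitively reaches an SMS API
cg = {"onReceive": ["handle"], "handle": ["sendTextMessage", "log"]}
found = reachable_paths(cg, ["onReceive"], {"sendTextMessage"})
# found["sendTextMessage"] == ["onReceive", "handle", "sendTextMessage"]
```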

Anomaly Identification

The anomaly identification is implemented in R and the script is available below.

Instrumentation

The app under test is instrumented to trace method execution in order to compute the GA fitness. This is a generic tool that inserts statements logging method calls, such as A() -> B(), meaning that in method A there is a call to method B. Each log entry is also marked with a tag that makes it easy to filter later. This tool is also available as a JAR.
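Consuming that log can be sketched as filtering lines by tag and recovering (caller, callee) pairs. The tag name and log lines below are illustrative placeholders, not the tool's actual tag.

```python
import re

# Recover (caller, callee) pairs from instrumentation log lines shaped like
# "[TAG] A() -> B()", keeping only lines with the expected tag.
LINE_RE = re.compile(r"\[(?P<tag>\w+)\]\s*(?P<caller>\S+)\s*->\s*(?P<callee>\S+)")

def executed_edges(log_lines, tag="PREV"):
    edges = []
    for line in log_lines:
        m = LINE_RE.search(line)
        if m and m.group("tag") == tag:
            edges.append((m.group("caller"), m.group("callee")))
    return edges

log = ["[PREV] onReceive() -> handle()",
       "[OTHER] noise() -> noise2()",
       "[PREV] handle() -> sendTextMessage()"]
edges = executed_edges(log)
```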

Test case generation

Once outlier paths are available, test cases are generated to execute these paths. A tool that uses a Genetic Algorithm has been implemented and is available as a JAR. This is also a generic tool that can generate inter-component communication (ICC) inputs (or simply, intents). It can be used in different contexts to generate ICC inputs; however, it expects the app to be instrumented to log method invocations in the [TAG] A() -> B() format.
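One plausible fitness for such a GA rewards an intent by how far along the target path the resulting execution got, based on the logged call edges. The prefix-coverage scoring below is an illustrative assumption, not PREV's exact fitness function.

```python
# Fitness of one generated intent: fraction of consecutive target-path edges
# that appear in the instrumented app's execution log. Scoring only the
# covered prefix rewards executions that get deeper into the target path.
def path_fitness(target_path, executed_edges):
    target_edges = list(zip(target_path, target_path[1:]))
    executed = set(executed_edges)
    covered = 0
    for edge in target_edges:
        if edge in executed:
            covered += 1
        else:
            break  # reward only the prefix actually reached
    return covered / len(target_edges)

target = ["onReceive", "handle", "sendTextMessage"]
fit = path_fitness(target, [("onReceive", "handle")])  # reached depth 1 of 2
```

An intent that drives execution all the way to the sensitive API gets fitness 1.0 and constitutes a proof-of-concept attack.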

Download tool | Download dataset

Usage

Prerequisites

In order to be able to classify apps, we first need training data. Our training data consists of 10K+ top Google Play apps. For the topic-based classification, we need to perform topic classification on the training apps.

Once topic classification, clustering, and cluster assignment are done using Mallet and Weka, we proceed with the following tools.

Modeling and outlier detection

In this section we present the script used to create a model and then use this model to find outliers.

To learn a model from a cluster of apps, we use the following R script. It produces the frequency vector and the cut-off/threshold. It expects the list of apps in a given cluster, the list of unique 'exposed' APIs in the cluster, an output filename for the average frequency vector, an output filename for the threshold/cut-off value, and the static analysis result of each app in the cluster.

args = commandArgs(trailingOnly=TRUE)

if (length(args) < 5) {
  stop("Usage: apps_list.csv apis_in_cluster.csv ave_out_filename cutoff_filename cluster_files", 
  				call.=FALSE)
}

wd <- getwd()

tryCatch(apps <- read.table(paste(wd, args[1], sep="/"), header=FALSE, sep="\n"), 
					error=function(e) NULL)
tryCatch(apis <- read.csv(paste(wd, args[2], sep="/"), header=FALSE, sep = "\n"), 
					error=function(e) NULL)

apps.names <- apps
apis.name <- apis

apps.names$ID <-  1:nrow(apps)
apis.name$ID <- seq.int(nrow(apis))

data <- data.frame(matrix(0, nrow(apps), nrow(apis)))

files <- list.files(path=paste(wd, args[5], sep="/"))
curr_dir <- getwd()
setwd(paste(wd, args[5], sep="/"))

rownames(data) <- apps.names$ID
colnames(data) <- apis.name$ID

for (i in 1:length(files)) {
  file = files[i]
  d1 <- NULL
  tryCatch(d1 <- read.table(file, header=FALSE, quote="\"", sep="\n"), error=function(e) NULL)
  if (is.null(d1)) next  # skip unreadable static analysis result files
  for (anApi in d1[,1]){
    id = apis.name$ID[apis.name==as.character(anApi)]
    data[i, id] <- 1 
    rm(id)
  }
  rm(anApi,d1,file)
}
rm(i)

average = rep(0, length(apis.name$ID))
total = average
for(i in 1:length(average)){
  average[i] = mean(data[,i])
  total[i] = sum(data[,i])
}
rm(i)

distances = rep(0, length(apps.names$ID))
for(i in 1:length(distances)){
  distance <- sqrt( sum((data[i,] - average) ^ 2) )
  distances[i] <- distance
  rm(distance)
}
rm(i)

bp <- boxplot(distances)

setwd(curr_dir)

# output average and cutoff distance
write.csv(average,paste(wd, args[3], sep="/"),row.names=FALSE)
write.csv(bp$stats[5,1], paste(wd, args[4], sep="/"), row.names=FALSE)
With this, we have a model of each cluster that we are interested in.

Once we have created a model, and assuming we have already performed the cluster assignment (using Weka), we perform outlier detection. We use the following R script to do so.

The script expects the name of the given app, the list of APIs in the given cluster, the average frequency vector for the cluster, the threshold for the cluster, the list of APIs used by the app, and an output file.

args = commandArgs(trailingOnly=TRUE)

if (length(args) < 6) {
  stop("Usage: app_name apis_in_cluster.csv average.txt cutoff.txt apps_api_usage.txt output_file", call.=FALSE)
}

wd <- getwd()

cutoff <- read.table(paste(wd, args[4], sep="/"), header=TRUE, quote="\"", sep="\n") # cutoff
average <- read.table(paste(wd, args[3], sep="/"), header=TRUE, quote="\"", sep="\n") # average
app <- args[1] # the app
apis <- read.table(paste(wd, args[2], sep="/"), header=FALSE, sep = "\n") # list of apis

output_file <- paste(wd, args[6], sep="/")

apps.names <- data.frame(app)
apis.name <- apis

apps.names$ID <-  1
apis.name$ID <- seq.int(nrow(apis))

data <- data.frame(matrix(0, nrow(apps.names), nrow(apis)))

rownames(data) <- apps.names$ID
colnames(data) <- apis.name$ID

newAPI=0
file = paste(wd, args[5], sep="/") # the app's API usage list
d1 <- read.table(file, header=FALSE, sep="\n")
for (anApi in d1[,1]){
  id = apis.name$ID[apis.name==as.character(anApi)]
  if (length(data[1, id]) != 0) {
    data[1, id] <- 1 
  }
  
  rm(id)
}
rm(anApi,file)

averages = t(average)

distance <- sqrt( sum((data[1,] - averages) ^ 2) ) #tested
print(paste("Distance:",distance,"---cutoff:",cutoff,sep=" "))

if (distance < as.double(cutoff) && newAPI != 1) {  
  quit()
} else {  
  med = median(averages[1,])
  for (anApi in d1[,1]){
    # get the id of the api from the list
    id = apis.name$ID[apis.name==as.character(anApi)]
    # get the average value m_i of the api from the cluster
    mi = averages[1, as.numeric(id)]

    if (length(mi) == 0) {
      # maybe the API does not belong to the cluster
      write(as.character(anApi), output_file, append=TRUE)
      next
    }

    # is the api's popularity below the median? 
    if (mi <= med) {
     # write to file the api for genetic algorithm
      write(as.character(anApi), output_file, append=TRUE)
    }
  }  
}

This script produces the list of anomalous APIs in the app. The following tools are provided as JAR files.

Instrumentation

Run the following command


# instrumentation
java -jar instrumenter.jar input_dir/${APK} ${platforms} output/${APK}
Where the ${platforms} variable is set to the directory containing the platform JAR files. You can find the different platform JARs either in the Android SDK or in this repository. This will produce an instrumented app. Don't forget to sign and zipalign the instrumented APK in order to be able to install it on an emulator/device.

#signing the instrumented apk
cd signAPK
./signApk.sh ../output/${APK}
                                

This will sign the instrumented APK that is in the "output" directory.

API Reachability Analysis

Run the following tool to extract the publicly reachable sensitive APIs:

 # extracting sensitive paths to sinks
java -jar CallGraphAnalyzer.jar input_dir/${APK} SourcesAndSinks.txt List_Of_All_Android_APIs.txt $platforms output/${APK}_sensitive_paths.txt
Where SourcesAndSinks.txt is just an empty file (reserved for data-flow analysis), while List_Of_All_Android_APIs.txt is the list of all permission-protected Android APIs, which can be downloaded from PScout. This will produce the list of paths leading to reachable sensitive APIs.

Test case generation

Before running the following command, make sure an emulator or a device is running, with the instrumented app under test already installed.

java -jar GAIntentGenerator.jar output/${APK}_sensitive_paths.txt ${APK} ${platforms} ${path_to_adb}
This attempts to produce inputs (test cases) that execute the paths. It is configured with an initial population of 150 and a maximum of 500 evaluations.

NOTE: it is recommended to pass the original unmodified ${APK} instead of the one that is instrumented.

Cheat sheet

Here is a summary of the commands

 
# inside PREV directory

# ${platform} points to a directory containing different Android platform JAR files (e.g., android-8, android-9...)

java -jar instrumenter.jar input/org.lumicall.android_186.apk ${platform} output/

# output dir will have the instrumented org.lumicall.android_186.apk file

cd signAPK
./signApk.sh ../output/org.lumicall.android_186.apk

cd ..

 # extracting sensitive paths to sinks
java -jar CallGraphAnalyzer.jar input/org.lumicall.android_186.apk SourcesAndSinks.txt $platform jellybean_allmappings.txt output/org.lumicall.android_186_sensitive_paths.txt

 # at this point, you do the outlier detection and select the paths that are interesting for test generation 
 # if you instead want to do test generation for all the paths in the app, continue with the following command, using all the paths from the previous command 

 # install the app 
adb install output/org.lumicall.android_186.apk

 # create results directory for the GA tool to output intents as adb command and their fitnesses 
mkdir results

 # start the GA. Intents as ADB command and their fitness will be output in the results dir for each component 
java -jar GAIntentGenerator.jar output/org.lumicall.android_186_sensitive_paths.txt input/org.lumicall.android_186.apk ${platform} /Library/Android/sdk/platform-tools/adb