
117 posts tagged with "software engineering"


10 ANDROID LIBRARIES EVERY DEVELOPER SHOULD KNOW

Published: · Last updated: · 3 min read
Appxiom Team
Mobile App Performance Experts

Android development can be complex and time-consuming, but luckily, there are many libraries available that can help make the process easier and faster. In this blog post, we'll explore 10 essential Android libraries that every developer should know and consider using in their projects.

  • Retrofit: Retrofit is a type-safe REST client for Android that makes it easy to retrieve and upload data to a server. It's a popular library that supports several data formats such as JSON, XML, and Protocol Buffers. With Retrofit, you can define API endpoints, request parameters, and response types in an interface, making it easy to create a robust REST client.

  • Glide: Glide is an image loading and caching library for Android that can load images from a variety of sources, including URLs, local files, and content providers. It's easy to use and can automatically scale and crop images to fit different device sizes and aspect ratios. Glide also provides advanced features like memory and disk caching, placeholder images, and animated GIF support.

  • Room: Room is an object-relational mapping (ORM) library that makes it easy to work with SQLite databases on Android. It provides an abstraction layer over raw SQL queries, allowing you to easily perform CRUD (create, read, update, delete) operations on database entities. Room also supports reactive programming with LiveData and RxJava, making it easy to create responsive UIs.

  • Dagger 2: Dagger 2 is a dependency injection (DI) library that helps manage the dependencies between different components of an Android app. It uses annotations to generate boilerplate code for injecting dependencies, making it easy to maintain a clean, modular architecture. Dagger 2 also supports compile-time validation of dependencies, reducing the risk of runtime errors.

  • OkHttp: OkHttp is an HTTP client library for Android that can handle both synchronous and asynchronous network requests. It provides a simple API for making requests and supports features like caching, authentication, and encryption. OkHttp is also highly customizable, allowing you to add interceptors, configure timeouts, and handle error responses.

  • Timber: Timber is a logging library for Android that makes it easy to debug and troubleshoot your app. It provides a simple API for logging messages with different levels of severity and can automatically tag log messages with useful information like the class name and line number. Timber also supports custom loggers, making it easy to integrate with third-party logging services.

  • Gson: Gson is a JSON serialization and deserialization library for Android that can convert JSON strings to Java objects and vice versa. It provides a simple API for defining custom serialization and deserialization rules and supports advanced features like nested objects, arrays, and polymorphic types. Gson can also handle malformed JSON input, making it robust and flexible.

  • Firebase: Firebase is a suite of tools and services for building Android apps. It includes a real-time database, authentication, hosting, and cloud messaging, along with serverless Cloud Functions that run in response to events and are automatically scaled and managed.

  • Appxiom: Appxiom is a lightweight SDK that detects performance issues and bugs such as memory leaks, ANRs, frozen frames, screen load delays, crashes, network call failures, exceptions, function failures, and more. The tool works seamlessly across the development, testing, and live phases.

  • Espresso: Espresso is a testing framework for Android that helps automate UI testing and ensure app quality. It provides a simple API for interacting with UI elements and simulating user actions like clicks, swipes, and text input.

CONCURRENCY AND PARALLELISM IN DART AND HOW IT IS USED IN FLUTTER

Published: · Last updated: · 7 min read
Appxiom Team
Mobile App Performance Experts

Concurrency and parallelism are essential concepts in programming that allow developers to optimize application performance and enhance user experience. In Dart, the programming language used in developing Flutter apps, concurrency and parallelism can be achieved using various mechanisms such as Isolates, Futures, and Streams. In this blog, we will discuss the basics of concurrency and parallelism in Dart, and how they can be used to improve the performance of Flutter apps.

What is Concurrency?

Concurrency is the ability of a system to make progress on multiple tasks at once. The tasks may be interleaved on a single processor rather than literally running at the same instant.

Isolates in Dart

In Dart, concurrency can be achieved through Isolates, which are Dart's lightweight units of concurrency that run in their own memory space, have their own event loop, and do not share memory with other isolates.

Isolates can communicate with each other through message passing, which involves sending and receiving messages between isolates. Isolates are designed to be safe and isolate the app's code from errors or bugs that may occur in other isolates. This means that if an isolate crashes, it will not affect the rest of the app or other isolates.

Isolates can be used to perform CPU-bound or long-running operations without blocking the UI thread or main isolate. This is important in Flutter apps, where long-running operations can cause the app to become unresponsive and affect the user experience.

To create an isolate in Dart, we can use the Isolate.spawn() method, which takes a function to be executed in the isolate as its argument. Here is an example:

import 'dart:isolate';

void main() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(isolateFunction, receivePort.sendPort);
  receivePort.listen((message) => print('Received: $message'));
}

void isolateFunction(SendPort sendPort) {
  sendPort.send('Hello from isolate!');
}

In this example, we create a new isolate using the Isolate.spawn() method and pass a function called isolateFunction to be executed in the isolate. The receivePort is used to receive messages sent from the isolate, and we listen for incoming messages using the listen() method. When the isolate sends a message using the sendPort.send() method, it is received by the receivePort, and we print the message to the console.

What is Parallelism?

Parallelism is the ability of a system to execute multiple tasks simultaneously on multiple processors or cores. In Dart, true parallelism is achieved with isolates, which can run on separate cores; Futures and Streams provide asynchronous programming, which keeps long-running work from blocking a thread but runs it concurrently on that single thread.

Futures in Dart

Futures in Dart represent a value that may not be available yet but will be at some point in the future. Futures can be used to perform asynchronous operations such as network requests, file I/O, and other long-running operations that do not block the UI thread.

To use a Future in Dart, we can create a new instance of the Future class and pass a function that returns the value of the Future as its argument. Here is an example:

void main() {
  final future = Future(() => 'Hello, world!');
  future.then((value) => print(value));
}

In this example, we create a new Future using the Future() constructor and pass a function that returns the value 'Hello, world!' as its argument. We then use the then() method to listen for the completion of the Future and print its value to the console.

Streams in Dart

Streams in Dart represent a sequence of values that can be asynchronously produced and consumed. Streams can be used to perform asynchronous operations that produce a series of values such as user input, sensor data, and other real-time data.

To use a Stream in Dart, we can create a new instance of the Stream class and pass a function that produces the values of the Stream as its argument. Here is an example:

import 'dart:async';

void main() {
  final stream = Stream.periodic(Duration(seconds: 1), (value) => value);
  stream.listen((value) => print(value));
}

In this example, we create a new Stream using the Stream.periodic() constructor and pass a function that produces the value of the Stream as its argument. The function returns the value of a counter that increments by one every second. We then use the listen() method to listen for the values produced by the Stream and print them to the console.

Concurrency and Parallelism in Flutter

In Flutter, concurrency and parallelism can be used to improve the performance of the app and enhance the user experience. Here are some examples of how concurrency and parallelism can be used in Flutter:

  • Performing long-running operations: Long-running operations such as network requests, file I/O, and database queries can be performed in isolates or using Futures to avoid blocking the UI thread and improve app performance.
import 'dart:async';
import 'package:flutter/material.dart';

class MyWidget extends StatelessWidget {
  Future<String> fetchData() async {
    // perform long-running operation
    return 'Hello, world!';
  }

  @override
  Widget build(BuildContext context) {
    return FutureBuilder<String>(
      future: fetchData(),
      builder: (context, snapshot) {
        if (snapshot.hasData) {
          return Text(snapshot.data!); // data is non-null when hasData is true
        } else if (snapshot.hasError) {
          return Text('${snapshot.error}');
        }
        return CircularProgressIndicator();
      },
    );
  }
}

In this example, we use a Future to perform a long-running operation that returns the value 'Hello, world!'. We then use a FutureBuilder widget to display the value returned by the Future when it is available.

  • Handling real-time data: Real-time data such as user input and sensor data can be handled using Streams to provide a responsive user experience.
import 'dart:async';
import 'package:flutter/material.dart';

class MyWidget extends StatefulWidget {
  @override
  _MyWidgetState createState() => _MyWidgetState();
}

class _MyWidgetState extends State<MyWidget> {
  final _streamController = StreamController<String>();

  @override
  void dispose() {
    _streamController.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return StreamBuilder<String>(
      stream: _streamController.stream,
      builder: (context, snapshot) {
        return TextField(
          onChanged: (value) => _streamController.add(value),
          decoration: InputDecoration(
            hintText: 'Enter text',
            labelText: 'Text',
          ),
        );
      },
    );
  }
}

In this example, we use a StreamController to handle user input from a TextField widget. We then use a StreamBuilder widget to listen for the values produced by the Stream and update the UI when new values are available.

  • Isolates are an excellent tool for providing concurrency in Flutter apps. They allow developers to perform computationally intensive operations in the background without blocking the main UI thread, which can improve the app's performance and responsiveness.
import 'dart:isolate';

import 'package:flutter/material.dart';

class MyWidget extends StatefulWidget {
  @override
  _MyWidgetState createState() => _MyWidgetState();
}

class _MyWidgetState extends State<MyWidget> {
  String _result = '';

  @override
  void initState() {
    super.initState();
    _calculate();
  }

  void _calculate() async {
    final receivePort = ReceivePort();
    final isolate = await Isolate.spawn(_compute, receivePort.sendPort);

    receivePort.listen((message) {
      setState(() {
        _result = 'Result: $message';
      });
      receivePort.close();
      isolate.kill();
    });
  }

  static void _compute(SendPort sendPort) {
    // Do some expensive computation here...
    final result = 42;
    sendPort.send(result);
  }

  @override
  Widget build(BuildContext context) {
    return Center(
      child: Text(_result),
    );
  }
}

In this example, we create a StatefulWidget called MyWidget. In the initState() method, we call the _calculate() method to perform some expensive computation in an isolate.

The _calculate() method creates a ReceivePort and spawns an isolate using the Isolate.spawn() method. We pass the sendPort of the ReceivePort to the _compute() function in the isolate.

In the _compute() function, we perform some expensive computation and send the result back to the main isolate using the sendPort.send() method.

In the receivePort.listen() callback, we update the _result variable with the computed result and call setState() to update the UI. We also close the ReceivePort and kill the isolate.

Finally, in the build() method, we display the computed result in a Text widget in the center of the screen.

Note that isolates cannot access the BuildContext object directly, so we cannot use Scaffold.of(context) or Navigator.of(context) inside an isolate. However, we can pass arguments to the _compute() function using the Isolate.spawn() method if needed.

Conclusion

Concurrency and parallelism are essential concepts in programming that can be used to optimize application performance and enhance user experience. In Dart, asynchronous concurrency is achieved with Futures and Streams, while true parallelism across CPU cores is achieved with Isolates.

In Flutter, concurrency and parallelism can be used to perform long-running operations, handle real-time data, and improve app performance. Understanding these concepts and how to use them in Flutter can help developers create fast and responsive apps that provide an excellent user experience.

ACHIEVING CONCURRENCY AND PARALLELISM IN KOTLIN USING THREADS AND COROUTINES

Published: · Last updated: · 9 min read
Appxiom Team
Mobile App Performance Experts

Concurrency and parallelism are two essential concepts in software development that allow you to execute multiple tasks simultaneously. Although these terms are often used interchangeably, they are distinct concepts.

In this blog post, we will explore concurrency and parallelism in Kotlin and how to implement them using threads and coroutines with some code samples.

Concurrency vs. Parallelism

Concurrency refers to the ability of a program to execute multiple tasks simultaneously, regardless of whether they are running on different processors or not. It involves breaking up a task into smaller pieces and executing them independently of each other. However, concurrency does not guarantee that the tasks will be executed in parallel.

Parallelism, on the other hand, refers to the ability of a program to execute multiple tasks simultaneously using multiple processors. It involves breaking up a task into smaller pieces and distributing them across multiple processors for simultaneous execution.
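To make the distinction concrete, here is a minimal JVM sketch (not from the original post; runTasks is an illustrative helper, and only standard-library executors are used): a pool with a single worker interleaves all tasks on one thread, which is concurrency without parallelism, while a larger pool can run the same tasks simultaneously on separate cores.

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Runs `n` small tasks on a pool of `workers` threads and returns
// the names of the distinct threads that actually executed them.
fun runTasks(n: Int, workers: Int): Set<String> {
    val names = ConcurrentHashMap.newKeySet<String>()
    val pool = Executors.newFixedThreadPool(workers)
    repeat(n) {
        pool.submit {
            Thread.sleep(50) // simulate a little work
            names.add(Thread.currentThread().name)
        }
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    return names
}

fun main() {
    // Concurrency without parallelism: one worker interleaves all tasks.
    println("1 worker used ${runTasks(8, 1).size} thread(s)")
    // Parallelism: four workers can run tasks simultaneously on separate cores.
    println("4 workers used ${runTasks(8, 4).size} thread(s)")
}
```

With one worker, every task reports the same thread name; with four workers, several distinct threads show up, which is what allows genuinely simultaneous execution on a multi-core machine.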

Threads

Threads are the most basic mechanism for achieving concurrency in Kotlin. A thread is a lightweight unit of execution that can run concurrently with other threads within a program. Each thread can execute a separate task, allowing multiple tasks to be executed simultaneously.

Threads achieve concurrency by allowing multiple threads to run concurrently on a single CPU. The CPU switches between threads, allowing each thread to execute a portion of its code. This switching happens so fast that it appears as if all threads are running simultaneously.

Threads also enable parallelism by allowing multiple threads to run on separate CPUs. In this case, each thread is assigned to a different CPU core, allowing multiple threads to be executed simultaneously.

Threads are created using the Thread class, which takes a function or lambda expression as an argument. The function or lambda expression contains the code that the thread will execute. The following code snippet demonstrates how to create a new thread:

val thread = Thread {
    // code to be executed in the thread
}

Once the thread is created, it can be started using the start() method. The start() method launches the thread and begins executing the code in the thread.

thread.start()

Threads can communicate with each other and share data using synchronization mechanisms like locks, semaphores, and monitors. However, this can be a challenging task, and incorrect synchronization can lead to race conditions and other concurrency bugs.
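As a minimal sketch of such synchronization (SafeCounter is an illustrative name, not a standard class), two threads increment a shared counter; the synchronized block makes each read-modify-write atomic, so no updates are lost:

```kotlin
// Two threads increment a shared counter; `synchronized` over a
// private monitor object makes each read-modify-write atomic.
class SafeCounter {
    private val lock = Any()
    var value = 0
        private set

    fun increment() {
        synchronized(lock) { value++ }
    }
}

fun main() {
    val counter = SafeCounter()
    val threads = List(2) {
        Thread { repeat(100_000) { counter.increment() } }
    }
    threads.forEach { it.start() }
    threads.forEach { it.join() } // wait for both threads to finish
    println(counter.value)        // always 200000 thanks to the lock
}
```

Without the synchronized block, the two increments can interleave and the final count would often fall below 200,000; that lost-update behavior is exactly the kind of race condition described above.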

Coroutines

Coroutines are a more advanced mechanism for achieving concurrency and parallelism in Kotlin. Coroutines are lightweight, and they provide a more flexible and scalable approach to concurrency than threads. Coroutines enable asynchronous, non-blocking code execution, making them ideal for use cases like network programming or graphical user interfaces.

Coroutines achieve concurrency by allowing multiple coroutines to be executed on a single thread. This is possible because coroutines are cooperative, meaning that they suspend their execution voluntarily, allowing other coroutines to run. This cooperative nature enables a single thread to execute multiple coroutines simultaneously, resulting in highly efficient and performant code.

Coroutines also enable parallelism by allowing multiple coroutines to be executed on separate threads or even separate CPUs. This is achieved by using coroutines with different coroutine contexts, which specify the thread or threads on which the coroutine should execute.

Coroutines are created using the launch and async coroutine builders from the kotlinx.coroutines library, called on a CoroutineScope such as GlobalScope (in production code, a structured CoroutineScope is preferred over GlobalScope). The launch function creates a new coroutine that runs in the background, while the async function creates a new coroutine that returns a result.

val job = GlobalScope.launch {
    // code to be executed in the coroutine
}

val deferred = GlobalScope.async {
    // code to be executed in the coroutine and return a result
}

Communicating between Coroutines using Channel

Coroutines can communicate with each other and share data using channels and suspending functions. Channels provide a way for coroutines to send and receive data asynchronously, while suspending functions enable coroutines to suspend their execution until a specific condition is met.

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>()

    // Producer: send the numbers 1..5 into the channel
    val job = launch {
        for (i in 1..5) {
            channel.send(i)
        }
    }

    // Consumer: receive the five values and sum them
    val deferred = async {
        var sum = 0
        repeat(5) { sum += channel.receive() }
        sum
    }
    println(deferred.await()) // prints 15
}

Coroutine Context

Coroutine context is a key concept in coroutines, and it provides a mechanism for managing the execution of coroutines. The coroutine context is a set of rules and properties that define how a coroutine should be executed. It includes information like the dispatcher, which specifies the thread or threads on which the coroutine should execute, and the job, which represents the lifecycle of the coroutine.

The dispatcher is responsible for assigning coroutines to threads. Different dispatchers are available, each with a different execution strategy. For example, the Dispatchers.Default dispatcher assigns coroutines to a thread pool, while the Dispatchers.IO dispatcher assigns coroutines to a pool of threads optimized for I/O operations.

The CoroutineContext interface represents a context for a coroutine, which includes information like the coroutine dispatcher and job. The coroutine context provides a way to control the execution of coroutines, including where they run, how they are executed, and how they are cancelled.

Let's explore how to use the coroutine context in Kotlin with a sample code.

import kotlinx.coroutines.*

fun main() = runBlocking<Unit> {
    launch(Dispatchers.Default) {
        println("Running in the Default dispatcher")
        println("Current thread: ${Thread.currentThread().name}")
    }

    launch(Dispatchers.IO) {
        println("Running in the IO dispatcher")
        println("Current thread: ${Thread.currentThread().name}")
    }

    launch(newSingleThreadContext("MyThread")) {
        println("Running in a single-threaded context")
        println("Current thread: ${Thread.currentThread().name}")
    }
}

In the above code, we are launching three coroutines with different dispatchers: Dispatchers.Default, Dispatchers.IO, and a new single-threaded context created with newSingleThreadContext("MyThread").

The runBlocking coroutine builder creates a scope in which coroutines can be launched, and it blocks the current thread until every coroutine inside it has completed, much as the Thread.join() method in Java blocks until the specified thread finishes.

When a coroutine is launched with a dispatcher, it is assigned to a thread pool managed by that dispatcher. In the above code, the first coroutine is launched with the Dispatchers.Default dispatcher, which assigns it to a thread pool optimized for CPU-bound tasks. The second coroutine is launched with the Dispatchers.IO dispatcher, which assigns it to a thread pool optimized for I/O-bound tasks. Finally, the third coroutine is launched with a new single-threaded context, which creates a new thread on which the coroutine runs.

When the coroutines run, they print out a message indicating which dispatcher or context they are running in, as well as the name of the current thread. The output might look something like this:

Running in the IO dispatcher
Current thread: DefaultDispatcher-worker-1
Running in a single-threaded context
Current thread: MyThread
Running in the Default dispatcher
Current thread: DefaultDispatcher-worker-2

In this example, we can see that the coroutines are running on different threads depending on the dispatcher or context they are launched with.

Example: Downloading Images

Using Threads

import java.net.URL

fun main() {
    val urls = listOf(
        "https://example.com/image1.jpg",
        "https://example.com/image2.jpg",
        "https://example.com/image3.jpg",
    )
    val threads = urls.map {
        Thread {
            val url = URL(it)
            val stream = url.openStream()
            // Code to process the downloaded image
        }
    }
    threads.forEach { it.start() }
    threads.forEach { it.join() }
}

In the code above, we define a list of URLs and use the map() function to create a list of threads that download each image in parallel. We then start each thread and wait for them to finish using the join() function.

Using Coroutines

import kotlinx.coroutines.*
import java.net.URL

fun main() = runBlocking {
    val urls = listOf(
        "https://example.com/image1.jpg",
        "https://example.com/image2.jpg",
        "https://example.com/image3.jpg",
    )
    val deferred = urls.map {
        // Dispatchers.IO keeps the blocking download off the main thread
        async(Dispatchers.IO) {
            val url = URL(it)
            val stream = url.openStream()
            // Code to process the downloaded image
        }
    }
    deferred.awaitAll()
}

In the code above, we define a list of URLs and use the map() function to create a list of coroutines that download each image in parallel. We then wait for all the coroutines to finish using the awaitAll() function.

Comparison: Thread vs Coroutine

When comparing coroutines and threads in Kotlin, there are several factors to consider that can affect performance. Here are some of the key differences between coroutines and threads in terms of performance:

  • Memory usage: Coroutines typically use less memory than threads because they are not tied to a specific thread and can reuse threads from a thread pool. This means that coroutines can potentially support a larger number of concurrent tasks without running out of memory.

  • Context switching: Context switching is the process of switching between different threads or coroutines. Context switching can be a performance bottleneck, as it involves saving and restoring the state of the thread or coroutine. Coroutines typically have a lower context switching overhead than threads because they use cooperative multitasking, where the coroutine decides when to suspend and resume execution, rather than relying on the operating system to schedule threads.

  • Scheduling: Coroutines are scheduled by a coroutine dispatcher, which determines which coroutine runs on which thread. This allows for more fine-grained control over how coroutines are executed and can improve performance by minimizing the number of context switches. Threads, on the other hand, are scheduled by the operating system, which can result in less control over scheduling and potentially more context switching.

  • Scalability: Coroutines can be more scalable than threads because they can be launched and cancelled more quickly, allowing for more dynamic allocation of resources. Coroutines can also be used with non-blocking I/O libraries, which can improve scalability by reducing the number of threads needed to handle I/O operations.

In general, coroutines can provide better performance for concurrent and asynchronous tasks due to their lower memory usage, lower context switching overhead, and more fine-grained control over scheduling.

Conclusion

In summary, concurrency and parallelism are essential concepts in software development, and Kotlin provides two mechanisms for achieving these goals: threads and coroutines. Threads achieve concurrency and parallelism by allowing multiple threads to run on a single or multiple CPUs. Coroutines achieve concurrency and parallelism by allowing multiple coroutines to be executed on a single or multiple threads, with each coroutine being cooperative and suspending its execution voluntarily.

With a solid understanding of threads and coroutines, developers can write highly efficient and performant applications that can execute multiple tasks simultaneously.

WHY MOBILE APP TESTERS AND DEVELOPERS SHOULD USE APM TOOLS FOR PERFORMANCE MONITORING

Published: · Last updated: · 2 min read
Appxiom Team
Mobile App Performance Experts

Performance monitoring and continuous bug monitoring are critical parts of the mobile app development lifecycle. As mobile devices become more powerful and users expect more from their apps, it is essential to ensure that apps are performing well and are free of bugs. One way to achieve this is by using Application Performance Management (APM) tools.

APM tools are designed to help mobile app testers and developers detect and diagnose performance issues and bugs in their apps. These tools provide a wide range of information about an app's performance, including memory usage, CPU usage, network activity, and more. This information can be used to identify bottlenecks, memory leaks, and other issues that can negatively impact an app's performance.

One of the main benefits of using APM tools is that they can help app developers and testers find and fix performance issues before they become a problem for users. By identifying issues early in the development process, teams can make changes to improve performance and ensure that the app is stable and reliable. This can help reduce the number of crashes and improve the overall user experience.

Another benefit of using APM tools is that they can help developers and testers understand how users are interacting with their apps. This can be especially useful for understanding how different user segments are interacting with the app, which can help teams optimize the user experience and make improvements that will have the biggest impact.

In short, APM tools are an essential tool for mobile app testers and developers. They help teams identify and fix performance issues and bugs, improve the user experience, and ensure that apps are stable and reliable. By using APM tools, teams can deliver better quality apps and create a more positive user experience.

Visit appxiom.com to learn more about how Appxiom can help you with monitoring performance and bugs in mobile apps.

HOW TO DETECT APP HANGS IN IOS APPS, AND FIX THEM

Published: · Last updated: · 4 min read
Don Peter
Cofounder and CTO, Appxiom

One major objective for any app developer is to make sure that their iOS app is smooth and responsive. So how does one make sure the app is responsive? The rule of thumb is that an application should react to user inputs and touches within 250 ms.

If the response time is above 250 ms, the delay becomes apparent and noticeable to the app user. Apple's documentation categorizes any app response delay that persists for more than 250 ms as an app hang.

What developers need is a reliable way to identify and fix app hangs. Xcode Organizer and bug reporting tools like MetricKit and Appxiom report app hangs, but the reports generated by Xcode Organizer and MetricKit are not real-time, and they sometimes miss hangs.

Appxiom reports app hangs in real time and supports development, TestFlight, and App Store builds. Because the reports are real-time, most app hang situations are captured.

Analyzing an App Hang Report

Here are some of the metrics needed for deeper analysis in an app hang report generated by Appxiom:

  • Top iOS versions affected.

  • Top device models affected.

  • First app version where the issue is detected.

  • Number of occurrences of the issue.

  • Total time for which the app was unresponsive.

  • Stack trace to identify the point of hang.

Appxiom provides App hang issue reports with these metrics.

(Dashboard screenshots: top OS versions, device models, and countries where the app hang was reported; number of devices and total occurrence count of the issue; previous app versions, if any, where the same issue was reported.)

In most cases, the stack trace provided with the issue should give an indication of where the root cause lies.

(Screenshot: stack trace indicating the point of origin of the app hang issue.)

There could be situations where the stack trace alone might not be sufficient. This is when the rest of the metrics help.

Look for patterns in the issue report, such as the top OS versions, the device models, and the app version where the issue first occurred. The occurrence count gives a sense of how severe the issue is. Using a combination of these, developers can prioritize the issue and put together the test device configuration needed to reproduce it.

The next step is to retrace how the user interacted with the app when the issue occurred. The activity trail, provided with each app hang issue report in the Appxiom dashboard, is a chronologically ordered list of events that happened prior to the hang. It includes events like the lifecycle of the different view controllers, network changes, and state changes of the app, along with custom events set by the developer.

(Screenshot: activity trail listing the events that occurred prior to the app hang issue.)

This helps developers retrace the app's events to identify and reproduce the issue.

How to avoid App Hang situations

Now that we know how to detect and fix App hangs, let us explore ways to prevent App hangs.

To achieve this, the main thread of the app must be kept free of intensive operations, because the application freezes when the main thread is stuck.

In practice, this means executing only tasks that interact with the user interface, i.e. UIKit, AppKit, or SwiftUI work, on the main thread. So how does one execute other long-running tasks? Use background threads, either directly or through an OperationQueue or a Grand Central Dispatch queue.

Let us take the example of a simple HTTP(S) request whose response data is used to update a table view in the app. It is not just the HTTP(S) request that we execute on a separate thread; de-serializing the JSON object from the response data should be executed off the main thread as well. Only the actual table view update needs to run on the main thread.

Following this coding practice will lower the possibility of App hang issues in your application.

Appxiom is available for Objective-C and Swift in iOS and watchOS, and for Java and Kotlin in Android. Click here to get started.

TIPS FROM CHATGPT FOR REDUCING BUG FIXING TIME

Published: · Last updated: · 2 min read
Appxiom Team
Mobile App Performance Experts

ChatGPT is brilliant, and awesome. I asked what I should do to reduce bug fixing time. This is the reply 👇.

There are several ways that you can reduce the time required to fix bugs, including the following:

  • Use automated testing tools: Automated testing tools can help to quickly identify and diagnose bugs, allowing developers to focus their efforts on fixing the issues rather than manually testing for them.

  • Prioritize bug fixes: Prioritizing the most important or severe bugs can help to ensure that the most critical issues are addressed first, saving time and effort in the long run.

  • Use debugging tools: Debugging tools can help to quickly identify the source of a bug, allowing developers to fix the issue more efficiently.

  • Collaborate and communicate: Effective collaboration and communication among team members can help to identify and resolve bugs more quickly, saving time and effort.

  • Adopt agile development methodologies: Agile development methodologies, such as Scrum and Kanban, can help teams to quickly identify and fix bugs, as well as adapt to changing requirements and priorities.

Overall, using a combination of these strategies can help to reduce the time and effort required to fix bugs.

I'm like ... impressed!

Clear, and to the point.

Do as ChatGPT suggests, and you will indeed reduce bug fixing time.

On a side note, Appxiom helps you with the first three points, and with the fourth to an extent.

Now that you are here, I recommend this blog post: How to reduce bug fixing time in mobile apps.

HOW TO REDUCE BUG FIXING TIME IN MOBILE APPS.

Published: · Last updated: · 2 min read
Appxiom Team
Mobile App Performance Experts

Bugs in the physical world can be beautiful. In the digital world, bugs are unwanted, but they come uninvited anyway.

We hate bugs in our apps because fixing them takes extra effort.

Let's look at some basic numbers.

A ballpark optimistic estimate of the effort involved in fixing a medium severity bug is as follows.

| Activity | Hours |
| --- | --- |
| Collecting data | 1.5 |
| Reproducing the bug | 0.5 |
| Coding the fix | 0.5 |
| Testing | 0.5 |
| Total | 3.0 |

If the effort required can be cut by one third, we save one hour per bug.

With a tool that can auto-detect and report bugs, data collection can be cut from 90 minutes to 30 minutes. That's a one-third cut in total effort.

Effort saved => One hour.

If the time required to reproduce the bug and to test the fix is reduced by 15 minutes each, that is another half hour saved. For now, let's leave that out of the calculation.

Even with a very low estimate of 10 medium-severity bugs per month, saving one hour on data collection alone saves 10 man-hours. That is 1.25 man-days, assuming one man-day equals 8 man-hours. So we save more than one man-day every month.

The actual savings can only be many multiples higher, since the numbers considered here are well below realistic figures, as we gather from our customers.

Try Appxiom to reduce bug fixing time in Android and iOS apps. Visit https://appxiom.com for more details.

Happy Bug Fixing.

THE ADVANTAGES AND DISADVANTAGES OF DEFENSIVE PROGRAMMING, AND HOW APPXIOM FINDS THE BALANCE.

Published: · Last updated: · 3 min read
Appxiom Team
Mobile App Performance Experts

While architecting any software solution, it's important to focus on the three principles below.

  • Incoming data is impure unless proven otherwise.

  • All data is important unless proven otherwise.

  • All code is insecure unless proven otherwise.

To incorporate these principles while building software applications, programmers tend to rely almost entirely on defensive programming, as if it were a divine universal solution. Wrapping entire codebases in multiple layers of try/catch blocks, verifying the same data at multiple points in the call flow, keeping unused data in memory because 'all data is important', and ignoring the 'proven otherwise' part of the principles are common features of codebases these days. Throwing and catching exceptions, as the name indicates, is meant for handling 'exceptions'; using them all over the place is an expensive bad idea.
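To see the difference, here is a hypothetical Java illustration (names and logic are illustrative only): the over-defensive version swallows every failure and returns a default, so bad input is never noticed, while the version that validates once at the boundary surfaces the bug immediately:

```java
public class DefensiveStyles {
    // Over-defensive: every step wrapped, failures silently swallowed.
    static int overDefensive(String input) {
        try {
            try {
                return Integer.parseInt(input.trim()) * 2;
            } catch (Exception e) {
                return 0; // bug hidden: caller never learns the input was bad
            }
        } catch (Exception e) {
            return 0; // second blanket layer, pure overhead
        }
    }

    // Validate once at the boundary; downstream code trusts its input.
    static int validated(String input) {
        if (input == null || !input.trim().matches("-?\\d+")) {
            throw new IllegalArgumentException("not a number: " + input);
        }
        return Integer.parseInt(input.trim()) * 2;
    }

    public static void main(String[] args) {
        System.out.println(overDefensive("oops"));  // bug goes unnoticed
        try {
            validated("oops");
        } catch (IllegalArgumentException e) {
            System.out.println("reported: " + e.getMessage()); // bug surfaces
        }
        System.out.println(validated("21"));
    }
}
```

The first call quietly returns 0 and the bad input is lost; the second reports it with enough detail to fix.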

I (almost) hate defensive programming the way it is done today.

No, I am not advocating against taking precautions for failure scenarios and potential security loopholes. We should. But overdoing the precautionary measures has a negative impact on performance. Many programmers end up doing exactly that in the name of defensive programming, and hurt performance as a result.

Another problem is that defensive coding techniques end up preventing some bugs from manifesting, so those bugs go unnoticed. The bugs continue to exist, but they are not conspicuous enough. They may corrupt the end result or affect performance, yet still go unnoticed. Even security bugs, though prevented from executing, continue to exist as potential threats in the code. As the developer remains unaware of these bugs, they go unfixed.

Of course, defensive programming has its advantages too. It helps the application gracefully handle unpleasant situations arising from bugs, and it helps in writing cleaner logging code.

So I very well understand why programmers tend to go for it.

How do we make use of defensive programming while still enabling active reporting of bugs and errors? This was one of our guiding questions when we started building Appxiom for detecting and reporting bugs in mobile apps. Bugs have to be reported with as many data points as possible. There should be a callback mechanism when bugs occur, so that the developer can handle the situation gracefully. And the entire process should happen in a resource-efficient manner, using minimal CPU and memory. Appxiom is architected on these concepts.

The way Appxiom is architected helps mobile app developers handle buggy situations and get notified as and when they occur. So they get the advantages of defensive programming along with enough information to reproduce and fix the bugs.
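The callback idea can be sketched roughly as follows; this is a hypothetical illustration of the pattern, not the actual Appxiom API. A `guard` helper reports the bug with context through a listener and then hands the caller a fallback, so the app degrades gracefully instead of silently swallowing the failure:

```java
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class BugReporterSketch {
    // A caught issue plus the context needed to reproduce it.
    static class BugReport {
        final String message;
        final Map<String, String> context;
        BugReport(String message, Map<String, String> context) {
            this.message = message;
            this.context = context;
        }
    }

    private final Consumer<BugReport> onBug;

    BugReporterSketch(Consumer<BugReport> onBug) {
        this.onBug = onBug;
    }

    // Guard a risky operation: report the bug with context, then let the
    // caller recover gracefully via a fallback value.
    <T> T guard(String where, Supplier<T> op, T fallback) {
        try {
            return op.get();
        } catch (Exception e) {
            onBug.accept(new BugReport(e.toString(), Map.of("where", where)));
            return fallback;
        }
    }

    public static void main(String[] args) {
        BugReporterSketch reporter = new BugReporterSketch(r ->
            System.out.println("bug at " + r.context.get("where") + ": " + r.message));
        int value = reporter.guard("parse", () -> Integer.parseInt("x"), -1);
        System.out.println("recovered with " + value);
    }
}
```

The app keeps running with the fallback, while the listener receives enough detail to reproduce and fix the bug.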

If you are an Android developer using Java or Kotlin, or an iOS developer using Objective-C or Swift, do check appxiom.com. Happy bug fixing.

PRODUCT ARCHITECTURE FAILURE - STORY OF HOW I MADE ONE, AND LOOKED STUPID.

Published: · Last updated: · 3 min read
Don Peter
Cofounder and CTO, Appxiom

Friends, I would like to share an incident where I failed as a programmer.

I got my first freelance project

After my college days, I began working as a freelance developer. I got my first project from my LinkedIn network. I was in charge of developing both the mobile app, and the backend. The core feature of the app was implemented with a periodic data fetch from the server and displaying it to the user.

And the dreaded moment came knocking

After a month of development work, the application went live. We got some traction, and then negative comments started appearing on the Play Store. A good number of users said they were not able to view the data!

We tested the app multiple times, only to find it working as per the expected workflow. This left us scratching our heads, as the feedback suggested our core feature was failing. The client started yelling at me during project calls.

After many days of testing and in-depth analysis, we concluded that the data loss was caused by a race condition between fetching and deleting data. The client had suggested removing data from the server after it was consumed by the mobile app. I communicated this to the server team, and they made sure data was removed as soon as it was retrieved by the app.

So this is what was happening. The app sends a GET request to fetch data. After sending the response, the server deletes the data. As per plan, yes! But if there is a network issue or any delay in receiving the response, the app sends the GET request again, only to find the data already deleted.

The app lost customers, and our client dropped the project before we could fix it.

Years passed, and I became a better developer

As time progressed, I worked with multiple programming languages, read RFC specifications, and gained experience with software architectural patterns. Working on different products at scale made me understand the need to properly architect a product. As I learned more about API design, it made me rethink what actually went wrong in my project.

The first mistake I made was not recognizing the lifecycle of the data to be deleted. Once the app receives data from the server, the data should be deleted only after the app sends a confirmation back to the server. The second mistake was executing multiple tasks in a single API call: one HTTP GET was used to both fetch and delete the data. Using GET requests to delete data goes against the intent of GET in the HTTP spec. The app would have functioned smoothly, and data loss would not have occurred, if one HTTP call fetched the data and a separate call deleted it.
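The corrected design can be sketched with hypothetical endpoints and an in-memory stand-in for the server: the GET stays read-only and safe to retry, and the delete happens only after the app explicitly acknowledges receipt:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class FetchThenAck {
    // In-memory stand-in for the server's message store.
    static final Map<Integer, String> store = new HashMap<>(Map.of(1, "payload"));

    // GET /data/{id}: read-only, so it is safe to retry after a timeout.
    static Optional<String> get(int id) {
        return Optional.ofNullable(store.get(id));
    }

    // DELETE /data/{id}: sent only after the app has stored the data.
    static void acknowledge(int id) {
        store.remove(id);
    }

    public static void main(String[] args) {
        String first = get(1).orElse("lost");
        // Simulate a timed-out response: the client simply retries the GET.
        String retry = get(1).orElse("lost"); // still there: GET didn't delete
        acknowledge(1);                       // app confirms receipt
        String after = get(1).orElse("lost"); // now gone, by design
        System.out.println(first + " / " + retry + " / " + after);
    }
}
```

With the delete tied to an explicit acknowledgement, a lost response costs only a retry, never the data.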

This is just one among the many instances where poor product architecture messed up the product.

Why am I writing this post? Because I see some product companies rushing through product development without focusing on architecture. Spending quality time understanding the lifecycle of data and architecting the solution accordingly helps a lot. As a developer, it will save you from embarrassment.

SOFTWARE BUGS - SOME HISTORY

Published: · Last updated: · 2 min read
Robin Alex Panicker
Cofounder and CPO, Appxiom

That computer programs could have errors is a thought as old as computers. In a note dated 1843, Countess Ada Lovelace, the world's first computer programmer, explained how Charles Babbage's Analytical Engine could generate wrong output not because of any fault in the device itself, but because it could be given wrong instructions. No wonder one of the most common and important features in programming languages is 'error handling'.

The earliest known written reference to 'bugs' is in a letter Thomas Alva Edison wrote to a colleague in 1878. No, not living 'bugs'; this is about 'bugs' as in failures, errors, and unexpected results in the non-living world of atoms and bits.

Edison writes ...

It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that "Bugs"—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.

In those days, the term meant mechanical errors and problems. The first time it was used in the computing domain was in 1947, when Grace Hopper traced the root cause of a problem in the electromechanical computer Harvard Mark II to a moth trapped in a relay. The word soon entered the common lexicon of computer engineers. Initially, it was used for hardware problems.

But it was the software engineers who took 'bugs' to its current popularity. Now we even have a whole industry built around software bugs. This includes bug detection, tracking, resolution, testing and so on. The objective is to provide the end users with a clean experience by early detection and fixing of bugs. The interesting fact is, as more bugs get fixed, even more bugs manifest.

Team Appxiom is happy and proud to be part of this 'bug' industry.

SECURING REMOTE WORK

Published: · Last updated: · 3 min read
Don Peter
Cofounder and CTO, Appxiom

Ever since the pandemic began, most developers have been practising the art of remote working. While we continue to enjoy the flexibility to work in our own times, at least some of us tend to overlook the fact that our home systems can be vulnerable to cyber threats. So I thought I should share some security measures that I took as part of the remote working policy of my company Appxiom.

Check-in code frequently

My laptop failed one not-so-fine morning. Luckily, I was able to get onto a new machine in no time, thanks to my habit of pushing code to Git frequently. This habit helps; push your code as often as possible.

Keeping machine software up to date

When outside the office's secure network, it is important to keep all software on your devices up to date; otherwise, our laptops are exposed to potential attacks.

AWS Virtual workspaces

I started using AWS's desktop-as-a-service (DaaS) solution, Amazon WorkSpaces, and my team is also moving toward it. It provides on-demand secure desktop terminals on an hourly or monthly basis. The best part of Amazon WorkSpaces is that it provides machines configurable for specific tasks. We used the service for our Android and web development activities.

Updating default passwords

Even with all the necessary software and hardware firewalls installed on our machines, many of us forget to change default passwords. I changed the default passwords of all my services and devices, including my home WiFi router. This might feel like a silly thing to do, but a recent incident at Nissan proved otherwise.

Using Multi-Factor Authentication (MFA)

I further stepped up the security of my digital assets by enabling two-factor authentication wherever possible, so that I have an extra layer of security in worst-case scenarios.

Ensuring a secure WiFi connection

I have always relied on either my home WiFi connection or a 4G dongle for my internet needs. For unavoidable travel, I carry my dongle and make sure not to use public WiFi connections, because you never know who controls those access points.

We developers were better prepared for a pandemic like COVID-19. Transitioning into a more technology-enabled new normal was a major challenge that nearly everyone faced last year. While the pandemic disrupted most other domains, software development more or less carried on as before, because our work is well suited to a flexible remote environment and we developers were already familiar with the available technology and collaboration tools.

We need to be better prepared to face the coming age of cyber threats while working remotely. After all, technology is evolving, and so are cyber threats.

HOW NOT TO BE A STUPID SOFTWARE ENGINEER.

Published: · Last updated: · 4 min read
Robin Alex Panicker
Cofounder and CPO, Appxiom

The source code of most of the applications used internally by a global company was leaked a month back. The reason? Someone there was careless and stupid enough to leave the code repository's default credentials as, wait for it, 'admin'/'admin'!

And one of the leaked applications was a data analysis tool that analysed the prices of their products. How did it do that? By scraping a public website owned by them! It's like stealing from one's own bank account, is it not?

I guess the global major is still figuring out how to clean up the mess.

A bank wrongly paid out $900M to lenders on behalf of a client. The blame fell on the bad UI/UX of the software used by the bank's staff, which resulted in the transaction. The court ruled that the bank cannot get the money back, meaning it cannot reclaim the disputed $500M of that $900M. The judge even called the incident one of the biggest blunders in banking history.

A news report from a couple of years back estimated that the combined wealth wiped out by failed digital transformation efforts at global majors is north of $900B!

While naming these companies is out of scope for this blog, all the incidents mentioned above are widely reported and can be googled.

Facepalm moments, aren't they? Well, listen: these situations were caused by software engineers like you and me. Why? Because many times we fail to apply common sense. We ignore warnings. We bypass important processes. All because we are too confident of ourselves, confidence bordering on megalomania. How can we, the experienced, be wrong, right? And the casualty is the quality of the applications we create.

That brings us to the crux of this blog post: what would it take to build better quality software?

In my opinion, there are three human qualities that help software engineers build better software, and all three are non-technology factors.

[Alert: This may sound too philosophical for some.]

1. Humility

Being humble is a virtue, as we all know. Being humble also helps us deliver quality. One reason we software engineers tend to overlook potential bugs and bypass processes is the "we know how it works" attitude. One may be humble as a person, but if our know-how about technology gets into our head, we lose our humility when it comes to our work. We refuse to unlearn and relearn. As a result, our deliverables suffer in quality.

2. Patience

Rushing through will not help. We need to maintain a reasonable speed, and reasonable speed only. Anything more only increases the chance of our becoming careless. Whether it's architecting a solution, writing code, testing, or deploying, make sure we execute at the right speed. We should tick all the checklist items and never bypass processes. That time is well spent.

3. Logical reasoning

Logical reasoning is a function of one’s ability to apply common sense at the right time. We need to be clear about the logic behind every action of ours, and should be able to explain why we did what we did. When following processes defined by someone else, make sure we understand what it’s about and why we are following that process. Everything that we do should have a logical reason. This clarity will help in ensuring quality in whatever we do.

Make no mistake, all three of the above qualities sound easy, but they are not. It takes much effort to imbibe these qualities and apply them in our work. But we need to; that will avoid facepalm moments for us and for our clients.

Your thoughts?