secure_programming
Uninitialized Variables
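A local variable holds whatever happened to be on the stack until it is explicitly assigned, and reading it before then is undefined behavior. A minimal sketch (the `sum_array` helper is illustrative, not from any particular codebase): initializing at the point of declaration removes the problem entirely.

```c
#include <stddef.h>

/* If the "= 0" below were omitted, 'sum' would start out holding stack
 * garbage, and reading it in the loop would be undefined behavior.
 * Initializing at the point of declaration makes the bug impossible. */
int sum_array(const int *buf, size_t n)
{
    int sum = 0;                /* initialized, not left to chance */
    for (size_t i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}
```

Compilers will warn about many (not all) such reads with `-Wuninitialized`, so build with warnings enabled.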
Unchecked Return Values
- When developing an algorithm, developers tend to focus attention on the "good" path, the path that leads to the desired result.
- The failure paths are given lower priority or forgotten about.
- Effort is spent on getting the algorithm correct with the intent to go back and add error checking.
- Once the algorithm is working, schedule pressure and deadlines may make moving on to a new task seem like a good idea.
- Always checking for error conditions while developing the "good" path is distracting to the developer.
- Each time the developer encounters a potential failure point, they stop thinking about the algorithm at hand and start thinking about how to handle the error: whether it can be recovered from gracefully or whether the program should abort.
- Once the error condition is handled, the developer must "ramp up" again to where they left off on the "good" path.
- It becomes very tempting to put off error handling to later, at which point it is easily neglected.
If you are going to put off error handling, don't just put in a "TODO" comment and expect to come back to it. Instead, do something that will force you to fix it.
#define FAIL() \
do { \
    fprintf(stderr, "abort! file: %s, line %d\n", __FILE__, __LINE__); \
    abort(); \
} while(0)
Then stub out your error checking:
if(foo() == ERROR) {
    FAIL(); /* force immediate abort if this is never fixed */
}
Function returns the same value for success or failure
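The classic example is `atoi()`: it returns 0 both for the string `"0"` and for unparseable input, so the caller cannot distinguish success from failure. A sketch of the usual fix (the `parse_int` wrapper is illustrative): use `strtol()`, which reports failure out-of-band via the end pointer and `errno`, and keep the return value for status only.

```c
#include <stdlib.h>
#include <errno.h>

/* atoi("0") and atoi("junk") both return 0.  strtol() lets the caller
 * tell them apart: 'end' stays at the start of unparsed text, and
 * errno is set on overflow. */
int parse_int(const char *s, long *out)
{
    char *end;
    errno = 0;
    long val = strtol(s, &end, 10);
    if (end == s || errno != 0)
        return -1;              /* distinct failure indication */
    *out = val;
    return 0;                   /* success; result via out-parameter */
}
```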
Buffer Overflows
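Unbounded copies such as `strcpy()` write past the end of the destination when the source is too long, corrupting adjacent memory. A minimal sketch (the `copy_name` helper is hypothetical): `snprintf()` always bounds the write and NUL-terminates within the given size, and its return value tells you whether truncation occurred.

```c
#include <stdio.h>

/* Bounded copy: never writes more than dstsize bytes, always
 * NUL-terminates.  A return of -1 means the input did not fit. */
int copy_name(char *dst, size_t dstsize, const char *src)
{
    int n = snprintf(dst, dstsize, "%s", src);
    /* n >= dstsize means snprintf truncated the input to fit */
    return (n >= 0 && (size_t)n < dstsize) ? 0 : -1;
}
```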
Memory Leaks
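Every `malloc()` needs a matching `free()` on every path, including early returns taken on error. A minimal sketch (the `process` function is illustrative) of the pattern:

```c
#include <stdlib.h>

/* Returning early between the malloc() and the free() without
 * releasing 'buf' would leak it on every call.  Error paths need the
 * cleanup just as much as the success path does. */
int process(size_t n)
{
    char *buf = malloc(n);
    if (buf == NULL)
        return -1;              /* nothing allocated yet: safe to bail */
    /* ... use buf ... */
    free(buf);                  /* release on the success path too */
    return 0;
}
```

Tools such as valgrind will report allocations that were never freed, along with the allocating call stack.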
Memory Allocation
#ifdef CHECK_ALLOC
#define MALLOC bad_malloc
#else
#define MALLOC malloc
#endif

#define FAIL_COUNT 3

/* Returns NULL on every FAIL_COUNTth call to simulate allocation
 * failure, so the error-handling paths actually get exercised. */
void* bad_malloc(size_t size)
{
    static int fail = FAIL_COUNT;
    void* ret = NULL;

    if(--fail)
        ret = malloc(size);
    else
        fail = FAIL_COUNT;

    return ret;
}
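Code under test then calls `MALLOC` instead of `malloc` and must handle the NULL return. A self-contained sketch with `CHECK_ALLOC` forced on (the `dup_string` caller is illustrative):

```c
#include <stdlib.h>
#include <string.h>

#define CHECK_ALLOC             /* forced on for this sketch */
#ifdef CHECK_ALLOC
#define MALLOC bad_malloc
#else
#define MALLOC malloc
#endif
#define FAIL_COUNT 3

/* Fails every FAIL_COUNTth call. */
static void* bad_malloc(size_t size)
{
    static int fail = FAIL_COUNT;
    void* ret = NULL;
    if(--fail)
        ret = malloc(size);
    else
        fail = FAIL_COUNT;
    return ret;
}

/* Caller must cope with MALLOC returning NULL. */
char *dup_string(const char *s)
{
    char *copy = MALLOC(strlen(s) + 1);
    if (copy == NULL)
        return NULL;            /* failure path hit every 3rd call */
    strcpy(copy, s);
    return copy;
}
```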
Heap Corruption
Electric Fence
$ gcc -o foo foo.c -lefence
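A typical bug Electric Fence catches is the off-by-one heap write: `strlen()` does not count the NUL terminator, so allocating `strlen(s)` bytes leaves `strcpy()` writing one byte past the block. Electric Fence places an inaccessible page immediately after each allocation, so the overrun faults at the offending instruction instead of silently corrupting the heap. A sketch (`dup_fixed` is illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Correct version: +1 leaves room for the NUL terminator. */
char *dup_fixed(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    if (p != NULL)
        strcpy(p, s);
    return p;
}

/* Buggy version that efence would trap at the strcpy():
 *     char *p = malloc(strlen(s));
 *     strcpy(p, s);            // writes 1 byte past the block
 */
```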
Race Conditions
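A common file-system race is check-then-use: between an `access()` check and the `open()`, another process can replace the file (for example with a symlink), so the check proves nothing. The usual fix is to make the check and the action a single atomic operation. A POSIX sketch (the `create_exclusive` helper is illustrative):

```c
#include <fcntl.h>
#include <unistd.h>

/* O_CREAT|O_EXCL makes "check that it doesn't exist" and "create it"
 * one atomic kernel operation, closing the window a separate
 * access()-then-open() sequence would leave open. */
int create_exclusive(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return -1;              /* already exists, or other failure */
    close(fd);
    return 0;
}
```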
Code Coverage
$ gcc -ftest-coverage -fprofile-arcs foo.c
$ gcov foo.c
Automated Tools
$ splint -I/inc *.c
secure_programming.txt · Last modified: 2023/08/18 18:15 by 127.0.0.1