linux no such job (Linux就该这么学)
What is xinetd used for in Linux?
1. What is xinetd
xinetd stands for "extended Internet daemon".
xinetd is the newer generation of network super-server daemon, also known as the super Internet server; it is commonly used to manage a variety of lightweight Internet services.
xinetd provides functionality similar to inetd combined with tcp_wrappers, but it is more powerful and more secure.
2. Features of xinetd
1) Powerful access control
— Built-in support for treating hostile and legitimate users differently.
— libwrap support, with better performance than tcpd.
— Connection limits: per-host connection counts and per-service connection counts.
— Access can be restricted to specific time windows.
— A service can be bound to a specific host address.
2) Effective DoS protection
— Connection rates can be limited.
— The maximum number of connections from a single host can be limited, preventing one host from monopolizing a service.
— Log file size can be limited, preventing the disk from filling up.
3) Powerful logging
— A syslog level can be set for each service.
— If syslog is not used, a separate log file can be created for each service.
— The start and end times of each request can be recorded, so the duration of each access is known.
— Attempted illegal accesses can be recorded.
4) Redirection
Client requests can be forwarded to another host for processing.
5) IPv6 support
xinetd has supported IPv6 since version 2.1.8.8pre*; it is enabled by passing the --with-inet6 option to the ./configure script.
Note that for this to work, the kernel and the network must support IPv6. IPv4 is still supported.
6) Interaction with clients
Whether or not a client request succeeds, xinetd reports the connection status back to the client.
3. Drawbacks of xinetd
Its biggest drawback at present is unstable RPC support, but this can be worked around by running portmap alongside xinetd.
4. Starting daemons with xinetd
In principle any system service can be run under xinetd, but the best candidates are common network services whose requests are neither too numerous nor too frequent.
Services like DNS and Apache are not suitable for this mode, while services like FTP, Telnet and SSH are.
The services that use xinetd by default fall into the following categories:
① Standard Internet services: telnet, ftp.
② Information services: finger, netstat, systat.
③ Mail services: imap, imaps, pop2, pop3, pops.
④ RPC services: rquotad, rstatd, rusersd, sprayd, walld.
⑤ BSD services: comsat, exec, login, ntalk, shell, talk.
⑥ Internal services: chargen, daytime, echo, servers, services, time.
⑦ Security services: irc.
⑧ Other services: name, tftp, uucp.
The services that can be managed by xinetd are listed in the /etc/services file.
An excerpt from this file:
# /etc/services:
# $Id: services,v 1.40 2004/09/23 05:45:18 notting Exp $
# service-name port/protocol [aliases...] [# comment]
tcpmux 1/tcp # TCP port service multiplexer
tcpmux 1/udp # TCP port service multiplexer
rje 5/tcp # Remote Job Entry
rje 5/udp # Remote Job Entry
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
……
This Internet network services file records service names together with the port numbers and protocols they use. Each line describes one service and consists of four fields separated by tabs or spaces: service name, port, protocol, and aliases. Normally you should not modify this file, because its contents follow the Internet standards; changing it may cause conflicts and prevent users from accessing resources normally. Port numbers on a Linux system range from 0 to 65535, and different ranges have different meanings:
— 0: not used.
— 1–1023: reserved for the system; only usable by root.
— 1024–4999: freely assigned to client programs.
— 5000–65535: freely assigned to server programs.
5. A look at /etc/xinetd.conf and /etc/xinetd.d/* (starting and stopping)
Installing the xinetd package (I noticed it was missing when I tried to restart xinetd and found the service was not registered):
The service could not be found; the output looked like this:
[root@linuxzgf ~]# service xinetd restart
xinetd: unrecognized service
[root@linuxzgf ~]# service xinetd reload
xinetd: unrecognized service
How to install it? Easy: the package is on the Linux installation media. Mount the installation image, enter the Server directory and locate the package:
[root@localhost Server]# find . -name 'xinet*'
./xinetd-2.3.14-10.el5.i386.rpm
[root@localhost Server]# rpm -ivh xinetd*
warning: xinetd-2.3.14-10.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...  ########################################### [100%]
1:xinetd      ########################################### [100%]
[root@localhost xinetd.d]# service xinetd restart
Stopping xinetd: [FAILED]
Starting xinetd: [ OK ]
[root@localhost xinetd.d]#
xinetd is now installed and restarted.
Alternatively, restart it with:
# /etc/init.d/xinetd restart
1) /etc/xinetd.conf
xinetd's configuration file is /etc/xinetd.conf, but it contains only a few defaults plus an include of the configuration files in the /etc/xinetd.d directory. To enable or disable an xinetd service, edit its configuration file in /etc/xinetd.d: setting the disable attribute to yes disables the service, and setting it to no enables it. /etc/xinetd.conf supports many options; below is the /etc/xinetd.conf shipped with RHEL 4.0.
# Simple configuration file for xinetd
# Some defaults, and include /etc/xinetd.d/
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST
cps = 25 30
}
includedir /etc/xinetd.d
— instances = 60: at most 60 concurrent server processes.
— log_type = SYSLOG authpriv: log through syslog, using the authpriv facility.
— log_on_success = HOST PID: on success, log the client host address and the server process ID.
— log_on_failure = HOST: on failure, log the client host address.
— cps = 25 30: accept at most 25 incoming connections per second; if that limit is exceeded, wait 30 seconds. This is mainly a defense against denial-of-service attacks.
— includedir /etc/xinetd.d: tells xinetd to include every file in the /etc/xinetd.d directory.
2) /etc/xinetd.d/*
Take one file in /etc/xinetd.d (rsync) as an example.
service rsync
{
disable = yes
socket_type = stream
wait = no
user = root
server = /usr/bin/rsync
log_on_failure += USERID
}
The meaning of each line:
— disable = yes: this service is disabled.
— socket_type = stream: the service uses stream (TCP) sockets.
— wait = no: do not wait, i.e. the service is handled in a multi-threaded fashion, with a new server started for each request.
— user = root: the service process runs as root.
— server = /usr/bin/rsync: the path of the server program to execute.
— log_on_failure += USERID: on failure, additionally log the remote user ID.
6. Configuring xinetd
1) Format
Each entry in /etc/xinetd.conf has the following form:
service service-name
{
……
}
where service is a required keyword, and the attribute list must be enclosed in braces. Each entry defines the service named service-name.
service-name is arbitrary, but is usually a standard network service name. Non-standard services can be added too, as long as they can be activated by a network request, including requests issued by localhost itself. Many attributes are available; the required attributes and the rules for using them are described later.
The operator can be =, += or -=. All attributes accept =, which assigns one or more values. Some attributes also accept += or -=, which respectively add values to the attribute's existing list of values or remove values from it.
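As an illustrative sketch (the service name and paths here are hypothetical), an entry combining these operators might look like:

```
service mydaemon
{
        socket_type     = stream
        wait            = no
        user            = nobody
        server          = /usr/sbin/mydaemon
        log_on_success  = HOST PID
        log_on_success  += DURATION
        only_from       = 192.168.1.0/24
        only_from       -= 192.168.1.99
}
```

Here = assigns the initial value lists, += appends DURATION to the success-log items, and -= removes one host from the only_from list.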
2) Configuration files
The related configuration files are:
/etc/xinetd.conf
/etc/xinetd.d/* // all files in this directory
/etc/hosts.allow
/etc/hosts.deny
3) disabled and enabled
The former takes a list of services to disable, the latter a list of services to enable. They share the same format (attribute name, then the service list, separated by spaces, e.g. disabled = in.tftpd in.rexecd), and both act globally. A service named in the disabled list is disabled no matter whether it has a configuration file or how that file is set up. If an enabled list is specified, only the listed services may start; if enabled is not specified, every service not named in disabled may start.
4) Caveats
① The following attributes cannot be changed on reconfiguration: socket_type, wait, protocol, type.
② If the only_from and no_access attributes are not specified (either directly in the service entry or via the defaults entry), access to the service is not restricted by IP address.
③ Address checking is done against IP addresses, not domain names.
7. How xinetd prevents denial-of-service (DoS) attacks
xinetd can effectively mitigate denial-of-service attacks for the following reasons.
1) Limiting the number of concurrent processes
The instances option sets the number of server processes that may run concurrently:
instances = 20
When the number of connected server processes for a service reaches 20, xinetd stops accepting further connection requests until the count drops below the limit.
2) Limiting the maximum number of connections per IP address
Limiting the number of connections from a single host prevents that host from monopolizing a service:
per_source = 5
Here each IP address may hold at most 5 connections.
3) Limiting log file size to keep the disk from filling up
Many attackers know that most services write logs. An intruder can generate a flood of error messages; the server records them all, which can make the log file huge, even fill the disk, and bury the intruder's real entry path under a mass of log entries. Limiting log file size is therefore one defense against denial-of-service attacks:
log_type = FILE /var/log/myservice.log 8388608 15728640
Here the log file has a soft limit of 8 MB; when it is reached, a warning is logged. When the 15 MB hard limit is reached, xinetd stops logging for the services that use this log file.
4) Limiting load
xinetd can also defend against DoS attacks by limiting load. A floating-point number serves as the load factor; when the system load reaches it, the service temporarily stops accepting further connections:
max_load = 2.8
The setting above means that when the system load average reaches 2.8, the affected services are suspended until the load falls back below the threshold.
Note that to use this option, xinetd must be built with --with-loadavg; xinetd then honors the max_load option and shuts down certain service processes when the system is overloaded, defending against some denial-of-service attacks.
5) Limiting the connection rate
xinetd can cap the connection rate with the cps option, for example:
cps = 25 60
This means the service accepts at most 25 connections per second; if that rate is exceeded, the service is disabled for 60 seconds, during which no requests are accepted.
6) Limiting hardware resource usage
The rlimit_as and rlimit_cpu options limit a service's memory and CPU consumption:
rlimit_as = 8M
rlimit_cpu = 20
The settings above cap the service at 8 MB of address space and 20 seconds of CPU time.
An important capability of xinetd is controlling the amount of resources its subordinate services may use; the settings above serve that purpose and help prevent a single xinetd service from consuming so many resources that a denial of service results.
The Linux system call fsync explained
Description:
Flushes all modified data of a file from memory to the storage device.
Usage:
#include <unistd.h>
int fsync(int fd);
Parameters:
fd: the file descriptor.
Return value:
Returns 0 on success. On failure it returns -1 and errno is set to one of the following values:
EBADF: the file descriptor is invalid.
EIO: an error occurred during reading or writing.
EROFS, EINVAL: the file system containing the file does not support synchronization.
Forcing the system cache to disk: the sync and fsync functions (and how fflush relates to fsync)
Traditional UNIX implementations keep a buffer cache in the kernel, and most disk I/O goes through it. When data is written to a file, the kernel first copies it into a buffer; if the buffer is not yet full, it is not queued for output immediately, but waits until it fills or the kernel needs to reuse it for another disk block. Only then is the buffer placed on the output queue, and the actual I/O happens when it reaches the head of the queue. This is called delayed write (Bach [1986], Chapter 3, discusses delayed writes in detail). Delayed writes reduce the number of disk operations but slow down the updating of file contents: data meant for a file may not reach the disk for some time, and if the system crashes, the delayed updates can be lost. To keep the on-disk file system consistent with the contents of the buffer cache, UNIX provides the sync and fsync system calls.
#include <unistd.h>
void sync(void);
int fsync(int filedes);
Returns: 0 if OK, -1 on error
sync simply queues all modified block buffers for writing and then returns; it does not wait for the actual I/O to complete. A system daemon (usually called update) calls sync about every 30 seconds, guaranteeing regular flushing of the kernel's block buffers. The sync(1) command also calls the sync function.
fsync applies to a single file, specified by the file descriptor filedes, and waits for the I/O to finish before returning. It is intended for applications such as databases that must be sure modified blocks have been written to disk. Compare fsync with the O_SYNC flag (Section 3.13): calling fsync flushes the file's contents on demand, whereas with O_SYNC the contents are flushed on every write to the file.
The relationship and differences between fflush and fsync
1. Provider: fflush is a library function provided by libc; fsync is a system call provided by the kernel.
2. Prototype: fflush takes a FILE * argument: fflush(FILE *). fsync takes an int file descriptor: fsync(int fd).
3. Function: fflush writes the C library's user-space buffers out via write (i.e. into the kernel's buffers). fsync flushes the kernel's buffers to disk.
C library buffer --fflush--> kernel buffer --fsync--> disk
Below is a reposted article in English:
Write-back support
UBIFS supports write-back, which means that file changes do not go to the flash media straight away, but they are cached and go to the flash later, when it is absolutely necessary. This helps to greatly reduce the amount of I/O which results in better performance. Write-back caching is a standard technique which is used by most file systems like ext3 or XFS.
In contrast, JFFS2 does not have write-back support and all JFFS2 file system changes go to the flash synchronously. Well, this is not completely true: JFFS2 does have a small buffer of NAND page size (if the underlying flash is NAND). This buffer contains the last written data and is flushed once it is full. However, because the amount of cached data is very small, JFFS2 is very close to a synchronous file system.
Write-back support requires application programmers to take extra care to synchronize important files in time. Otherwise, files may become corrupted or disappear on power cuts, which happen very often on many embedded devices. Let's take a glimpse at the Linux manual pages:
$ man 2 write
....
NOTES
A successful return from write() does not make any guarantee that data
has been committed to disk. In fact, on some buggy implementations, it
does not even guarantee that space has successfully been reserved for
the data. The only way to be sure is to call fsync(2) after you are
done writing all your data.
...
This is true for UBIFS (except for the "some buggy implementations" part, because UBIFS does reserve space for cached dirty data). This is also true for JFFS2, as well as for any other Linux file system.
However, some (perhaps not very good) user-space programmers do not take write-back into account. They do not read the manual pages carefully. When such applications are used in embedded systems running JFFS2, they work fine, because JFFS2 is almost synchronous. Of course, the applications are buggy, but they appear to work well enough with JFFS2. The bugs show up when UBIFS is used instead. Please be careful and check/test your applications for power-cut tolerance if you switch from JFFS2 to UBIFS. The following is a list of useful hints and advice.
If you want to switch to synchronous mode, use the -o sync option when mounting UBIFS; however, file system performance will drop, so be careful. Also remember that UBIFS mounted in synchronous mode provides fewer guarantees than JFFS2; refer to the relevant section for details.
Always keep in mind the above statement from the manual pages and run fsync() for all important files you change; of course, there is no need to synchronize "throw-away" temporary files. Just think about how important the file data is and decide; and do not use fsync() unnecessarily, because it hurts performance.
If you want to be more precise, you may use fdatasync(), in which case only data changes are flushed, but not inode meta-data changes (e.g. "mtime" or permissions); this may be cheaper than fsync() if synchronization is done often, e.g. in a loop. Otherwise just stick with fsync().
In a shell, the sync command may be used, but it synchronizes the whole file system, which may be suboptimal; there is also a similar libc sync() function.
You may use the O_SYNC flag of the open() call; this makes sure all data (but not meta-data) changes go to the media before the write() operation returns. In general, though, it is better to use fsync(), because O_SYNC makes each write synchronous, while fsync() lets you accumulate many writes and synchronize them at once.
It is possible to make certain inodes synchronous by default by setting the "sync" inode flag; in a shell, the chattr +S command may be used; in C programs, use the FS_IOC_SETFLAGS ioctl command. Note that the mkfs.ubifs tool checks for the "sync" flag in the original FS tree, so synchronous files in the original FS tree will be synchronous in the resulting UBIFS image.
Let us stress that the above items are true for any Linux file system, including JFFS2.
fsync() may be called for directories: it synchronizes the directory inode meta-data. The "sync" flag may also be set for directories to make the directory inode synchronous. But the flag is inherited, which means all new children of the directory will also have it. New files and sub-directories of the directory will also be synchronous, and their children, and so forth. This feature is very useful if one needs to create a whole sub-tree of synchronous files and directories, or to make all new children of some directory synchronous by default (e.g. /etc).
The fdatasync() call for directories is a no-op in UBIFS, and all UBIFS operations which change directory entries are synchronous. However, you should not assume this for portability (e.g. it is not true for ext2). Similarly, the "dirsync" inode flag has no effect in UBIFS.
The functions mentioned above work on file descriptors, not on streams (FILE *). To synchronize a stream, first get its file descriptor using the fileno() libc function, then flush the stream using fflush(), and then synchronize the file using fsync() or fdatasync(). You may use other synchronization methods, but remember to flush the stream before synchronizing the file. The fflush() function flushes the libc-level buffers, while sync(), fsync(), etc. flush the kernel-level buffers.
Please refer to this FAQ entry for information about how to atomically update the contents of a file. Theodore Ts'o's article is also a good read.
Write-back knobs in Linux
Linux has several knobs in /proc/sys/vm which you may use to tune write-back. The knobs are global, so they affect all file systems. Please refer to the Documentation/sysctl/vm.txt file for more information; it can be found in the Linux kernel source tree. Below, the interesting knobs are described in a UBIFS context and in simplified form.
dirty_writeback_centisecs- how often the Linux periodic write-back thread wakes up and writes out dirty data. This is a mechanism which makes sure all dirty data hits the media at some point.
dirty_expire_centisecs- dirty data expire period. This is maximum time data may stay dirty. After this period of time it will be written back by the Linux periodic write-back thread. IOW, the periodic write-back thread wakes up every"dirty_writeback_centisecs" centi-seconds and synchronizes data which was dirtied"dirty_expire_centisecs" centi-seconds ago.
dirty_background_ratio- maximum amount of dirty data in percent of total memory. When the amount of dirty data becomes larger, the periodic write-back thread starts synchronizing it until it becomes smaller. Even non-expired data will be synchronized. This may be used to set a"soft" limit for the amount of dirty data in the system.
dirty_ratio- maximum amount of dirty data at which writers will first synchronize the existing dirty data before adding more. IOW, this is a"hard" limit of the amount of dirty data in the system.
Note, UBIFS additionally has small write-buffers which are synchronized every 3-5 seconds. This means that most of the dirty data are delayed by dirty_expire_centisecs centi-seconds, but the last few KiB are additionally delayed by 3-5 seconds.
UBIFS write-buffer
UBIFS is an asynchronous file system (read this section for more information). Like other Linux file systems, it utilizes the page cache. The page cache is a generic Linux memory-management mechanism. It may be very large and cache a lot of data. When you write to a file, the data is written to the page cache, marked as dirty, and the write returns (unless the file is synchronous). Later the data is written back.
The write-buffer is an additional UBIFS buffer, implemented inside UBIFS, which sits between the page cache and the flash. This means that write-back actually writes to the write-buffer, not directly to the flash.
The write-buffer is designed to speed up UBIFS on NAND flashes. NAND flashes consist of NAND pages, which are usually 512 bytes, 2KiB or 4KiB in size. A NAND page is the minimal read/write unit of NAND flash (see this section).
The write-buffer size is equal to the NAND page size (so it is tiny compared to the page cache). Its purpose is to accumulate small writes and write full NAND pages instead of partially filled ones. Indeed, imagine we have to write four 512-byte nodes at half-second intervals, and the NAND page size is 2KiB. Without the write-buffer we would have to write 4 NAND pages and waste 6KiB of flash space, while the write-buffer allows us to write only once and waste nothing. We write less, we create less dirty space so the UBIFS garbage collector has less work to do, and we save power.
Of course, the example shows an ideal situation; even with the write-buffer we may waste space, for example in the case of synchronous I/O, or if the data arrives at long intervals. This is because the write-buffer has an associated timer, which flushes it every 3-5 seconds even if it is not full. We do this for data-integrity reasons.
When UBIFS has to write a lot of data, it does not use the write-buffer: only the last part of the data, which is smaller than a NAND page, ends up in the write-buffer and waits for more data until it is flushed by the timer.
The write-buffer implementation is a little more complex, and we actually have several of them, one for each journal head. But this does not change the basic idea behind the write-buffer.
A few notes regarding synchronization:
"sync()" also synchronizes all write-buffers;
"fsync(fd)" also synchronizes all write-buffers which contain pieces of "fd";
synchronous files, as well as files opened with "O_SYNC", bypass write-buffers, so the I/O is indeed synchronous for these files;
write-buffers are also bypassed if the file system is mounted with the "-o sync" mount option.
Take into account that write-buffers delay the data synchronization timeout defined by "dirty_expire_centisecs" (see above) by 3-5 seconds. However, since write-buffers are small, only a small amount of data is delayed.
UBIFS in synchronous mode vs JFFS2
When UBIFS is mounted in synchronous mode (the -o sync mount option), all file system operations become synchronous. This means that all data are written to the flash before the file system operations return.
For example, if you write 10MiB of data to a file f.dat using the write() call, and UBIFS is in synchronous mode, then UBIFS guarantees that all 10MiB of data and the meta-data(file size and date changes) will reach the flash media before write() returns. And if a power cut happens after the write() call returns, the file will contain the written data.
The same is true when f.dat was opened with O_SYNC or has the sync flag (see man 1 chattr).
It is well known that the JFFS2 file system is synchronous (except for a small write-buffer). However, UBIFS in synchronous mode is not the same as JFFS2, and it provides somewhat fewer guarantees than JFFS2 does with respect to sudden power cuts.
In JFFS2 all the meta-data (like inode atime/mtime/ctime, inode size, UID/GID, etc.) are stored in the data node headers. Data nodes carry 4KiB of (compressed) data. This means that the meta-data information is duplicated in many places, but it also means that every time JFFS2 writes a data node to the flash media, it updates the inode size as well. So when JFFS2 mounts, it scans the flash media, finds the latest data node, and fetches the inode size from there.
In practice this means that JFFS2 will write these 10MiB of data sequentially, from the beginning to the end. And if a power cut happens, you just lose some amount of data at the end of the inode. For example, if JFFS2 starts writing those 10MiB of data, writes 5MiB, and a power cut happens, you end up with a 5MiB f.dat file. You lose only the last 5MiB.
Things are a little more complex in the case of UBIFS, where data are stored in data nodes and meta-data are stored in (separate) inode nodes. The meta-data are not duplicated in each data node as in JFFS2. UBIFS never writes data nodes beyond the on-flash inode size. If it has to write a data node and the data node is beyond the on-flash inode size (the in-memory inode has the up-to-date size, but it is dirty and has not been flushed yet), then UBIFS first writes the inode to the media, and then it starts writing the data. And if there is a power cut, you lose data nodes and you have holes (or old data nodes, if you were overwriting). Let's consider an example.
User creates an empty file f.dat. The file is synchronous, or UBIFS is mounted in synchronous mode. User calls the write() function with a 10MiB buffer.
The kernel first copies all 10MiB of the data to the page cache. Inode size is changed to 10MiB as well and the inode is marked as dirty. Nothing has been written to the flash media so far. If a power cut happens at this point, the user will end up with an empty f.dat file.
UBIFS sees that the I/O has to be synchronous and starts synchronizing the inode. First of all, it writes the inode node to the flash media. If a power cut happens at this moment, the user will end up with a 10MiB file which contains no data (a hole), and reading the file will return 10MiB of zeroes.
UBIFS starts writing the data. If a power cut happens at this point, the user will end up with a 10MiB file containing a hole at the end.
Note that if the I/O was not synchronous, UBIFS would skip the last step and just return; the actual write-back would then happen in the background. But power cuts during write-back could anyway lead to files with holes at the end.
Thus, synchronous I/O in UBIFS provides fewer guarantees than JFFS2 I/O: UBIFS can leave holes at the end of files. In an ideal world, applications should not assume anything about the contents of files which were not synchronized before a power cut happened. And "mainstream" file systems like ext3 do not provide JFFS2-like guarantees either.
However, UBIFS is sometimes used as a JFFS2 replacement, and people may want it to behave the same way as JFFS2 when mounted synchronously. This is doable but needs some non-trivial development, so it has not been implemented so far. On the other hand, there has been no strong demand. You may implement this as an exercise, or you may try to convince the UBIFS authors to do it.
Synchronization exceptions for buggy applications
As this section describes, UBIFS is an asynchronous file-system, and applications should synchronize their files whenever it is required. The same applies to most Linux file-systems, e.g. XFS.
However, many applications ignore this and do not synchronize files properly. There was a huge war between user-space and kernel developers over the ext4 delayed allocation feature. Please see Theodore Ts'o's blog post; more information may be found in this LWN article.
In short, the flame war was about two cases. The first case was atomic re-name, where many user-space programs did not synchronize the copy before re-naming it. The second case was applications which truncate files, then change them. There was no final agreement, but the "we cannot ignore the real world" argument found the ext4 developers' understanding, and there were two ext4 changes which address both problems.
Roughly speaking, the first change made ext4 synchronize files on close if they had previously been truncated. This was a hack from the file system point of view, but it "fixed" applications which truncate files, write new contents, and close the files without synchronizing them.
The second change made ext4 synchronize the renamed file.
Well, this is not an entirely accurate description, because ext4 does not write the files synchronously, but actually initiates asynchronous write-out of the files, so the performance hit is not very high. For the truncation case this means that the file is synchronized soon after it is closed. For the re-name case this means that ext4 writes the data before it writes the re-name meta-data.
However, application writers should never rely on these things, because they are not portable. Instead, they should properly synchronize files. The ext4 fixes were made because there were already many broken user-space applications in the wild.
We have plans to implement these features in UBIFS, but this has not been done yet. The problem is that UBI/MTD are fully synchronous and we cannot initiate asynchronous write-out, so we would have to synchronously write files on close/rename, which is slow. Implementing these features would therefore require implementing asynchronous I/O in UBI, which is a big job. But feel free to do this :-).