The answer is "it depends." In general, neither client is very demanding, and how much bandwidth each consumes depends on what it is transferring at any given moment.
Most of the time, a telnet session transfers nothing. When a user is typing, there are short bursts of a few bytes at a time. The most stress a telnet session is likely to place on a network link comes while it is dumping data to the console at the remote end, and even then it is limited to whatever the end-to-end path allows, where every factor, from the TCP/IP stack, to the NIC and drivers, to the bandwidth bottleneck (the lowest transfer rate supported anywhere on the path), to the configuration on the remote server, must be considered.
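The "bottleneck" point above can be sketched in a few lines: the end-to-end ceiling is simply the minimum rate across every hop. The link speeds below are hypothetical examples, not measurements from any real path.

```python
# The effective end-to-end rate is capped by the slowest element on the path.
# All speeds below are hypothetical, in kilobits per second.
link_speeds_kbps = [
    10_000,  # client NIC to LAN switch
    1_544,   # T1 WAN link
    45_000,  # provider backbone hop
    128,     # remote server's configured per-session cap
]

bottleneck_kbps = min(link_speeds_kbps)
print(f"End-to-end ceiling: {bottleneck_kbps} kbps")
```

No matter how fast the local NIC is, the session can never move data faster than the slowest element, here the remote server's 128 kbps cap.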
An HTML browser like IE is much the same: it does nothing until a request is made, and then the load depends on what was requested. While idle, it consumes essentially no bandwidth. But assume the worst case, where the user has requested a page or file containing a very large amount of data; once again, the end-to-end path defines the maximum supported rate.
So the answer is still "it depends." I will guess that you are trying to estimate typical bandwidth consumption based on the number of clients your users run. Just for the sake of conversation, I would estimate an average of, at most, a few kbps per active client. At peak rates for very large transfers, I would estimate about 200 kbps per active client, based on the maximum transfer rates typically allowed by servers - but only a small fraction of your population will be doing peak transfers at any given moment. So you can't just multiply the number of clients by the peak rate; you must also factor in the percentage of time spent operating at that capacity, which is likely much less than 1%.
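The estimate above can be turned into simple arithmetic. All figures here are the illustrative assumptions from the text (a few kbps average, 200 kbps peak, under 1% of clients at peak), and the client count is hypothetical:

```python
# Rough aggregate bandwidth estimate; every number is an illustrative
# assumption, not a measurement.
clients = 500           # number of active clients (hypothetical)
avg_kbps = 3            # typical per-client consumption: a few kbps
peak_kbps = 200         # per-client rate during a very large transfer
peak_fraction = 0.01    # share of clients at peak at any instant (<1%)

# Most clients idle along near the average rate; a small fraction burst
# at the peak rate. Weight the two accordingly rather than multiplying
# every client by the peak.
estimate_kbps = clients * ((1 - peak_fraction) * avg_kbps
                           + peak_fraction * peak_kbps)
print(f"Estimated aggregate load: {estimate_kbps:.0f} kbps")
```

With these assumptions the estimate comes out far below the naive `clients * peak_kbps` figure (100,000 kbps for 500 clients), which is the point of the paragraph: duty cycle matters as much as peak rate.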
This was first published in March 2003.